At Apple's annual Worldwide Developers Conference yesterday, one announcement garnered more applause than pretty much everything else combined. It wasn't iOS 10, or the latest version of OS X (now rechristened macOS), or a hardware product. It was Swift Playgrounds: a free iPad app that aims to teach kids and adults alike to code using Apple's Swift programming language.
The new learn-to-code app takes a similar approach to teaching the basics of code as several products previously covered by Co.Design, like Osmo Coding. The earliest lessons all revolve around moving a digital creature around video game levels collecting gems using snippets of code. Later, as users' programming skills get more advanced, they can learn more sophisticated coding techniques through interactive lessons, and even put together simple iOS apps themselves, which run in a split-screen sandbox within the Playgrounds app.
But none of this fully explains why the 5,000 developers in attendance got so excited about Swift Playgrounds. The main reason? It opens the door for the iPad as a coding environment.
Right now, if you want to code an app for the iPhone, iPad, or Apple Watch, you can only do it on a Mac. That's a policy that has served Apple well over the years: it's no accident that Mac sales started dramatically rising when Apple first announced the iOS App Store as part of iOS 2.0 (then called iPhone OS 2). And truthfully, it would have been difficult, if not impossible, for Apple to allow developers to build apps on the underpowered chips of yesteryear's iPhones and iPads, because compiling apps is computationally expensive.
These days, though, it looks increasingly like an omission that iOS devices can't actually be used to code iOS apps. Apple has sold over 1 billion iOS devices. Even in a weak quarter, it sells 15 to 20 times more iPhones and iPads than Macs. iPads, meanwhile, are getting more Mac-like, supporting advanced desktop functionality like the ability to run multiple apps side by side. Heck, the 12.9-inch iPad Pro is even as powerful as a Mac, falling in benchmarks somewhere between the MacBook Air and MacBook Pro.
Looking at these numbers, it makes little sense that developers in 2016 can't design and program on the same devices their apps are meant to run on. And as desktop and laptop sales continue to shrink, it's even more ludicrous that companies like Apple haven't opened up their mobile devices to coders. The whole world is moving towards doing literally everything on smartphones and tablets. That's the whole idea behind the iPad Pro, the first Apple tablet that could legitimately be considered a laptop replacement. It's downright silly that code is the sole exception to the "do everything on mobile" mindset.
Swift Playgrounds isn't a full coding platform. The apps you create with it can't be shared with friends or uploaded to the App Store. Still, the reason the WWDC audience cheered yesterday is that it forecasts a day in which iPad apps are designed and built on iPads, not PCs. Otherwise, why teach a whole new generation of coders on a device that will never have a professional coding environment?
IKEA's Swedish sensibility usually lends itself to cheery, colorful ads, but to advertise its new Sinnerlig collection, the flat pack mega-retailer is going noir.
The new campaign comes from commercial production agency RBG6, which shot the Sinnerlig chair and pitcher in darkened rooms to emphasize their clean geometric lines, focusing on the shadows they cast when basked in high-contrast beams of light.
The result is something quintessentially Swedish: an exploration of penumbra and illumination that almost feels like it was plucked right out of an Ingmar Bergman film—albeit something from the Swedish auteur's filmography less miserablist than The Virgin Spring or The Seventh Seal. Smiles of a Summer Night, perhaps?
"Design thinking" has become the watchword for an entire generation of MBAs who think that becoming the next Steve Jobs is as easy as a whiteboard filled with disruptive ideation. "This broad definition of design thinking [practiced at many businesses] is a single-faceted cliché of what design really is, and how it can contribute to business," says Pentagram partner Michael Bierut.
And while Bierut and his co-instructor, designer Jessica Helfand, admit they don't have some "grand unified theory of design" to teach business majors, they outlined their approach to me, explaining why they think design education is increasingly important for business majors—and, well, everyone else, too.
One way Bierut and Helfand intend to differentiate their approach from other business school design curricula is by teaching design not as a "step-by-step methodology." Rather, it's more like a second language.
Admittedly, comparing design to a language is sort of cliché. But learning design the way you learn a language isn't. The goal, Bierut says, is to teach business majors "to speak about design fluidly, with equal mixtures of humility and confidence, so that it can bring them not only to commercial success, but to life itself."
In other words: more humility, more empathy, more understanding, and a hell of a lot less jargon.
A major reason Bierut and Helfand think design should be taught in business schools is that what passes for design thinking at most companies is actually very shallow. "In business, design has become very systemized," Helfand says. She dismisses some of the techniques that businesses rely on (such as whiteboard brainstorming) as hoary, plug-and-play exercises that are just surface cover for a lack of real understanding about what makes design work.
Down the line, this superficial knowledge of design can cause problems between designers and clients, who are not really speaking the same language, even though they might think they are. A better alternative, Bierut says, is bringing designers into b-school classrooms early—as a way of training their future clients with the goal of "informing a rich, multifaceted view of the way design and business can interact with each other."
It doesn't take a lot to understand why Bierut and Helfand think that design should be taught to MBAs—the fields of design and business intersect all the time. But it's not just business students who should receive a design education, they point out. It's everyone.
Design, Bierut tells me, should be taught like any other subject in a classic humanistic education. It's all about educating what Bierut calls "the whole person," not just a part of them. Whether you're a nurse, a firefighter, or a future day trader or senior VP, Bierut and Helfand think that design is something that can enrich every person, and help them be successful not just in their careers, but in their lives.
Stoplights regulate car and pedestrian traffic. Could they also be used to regulate the food we eat?
Researchers at the Perelman School of Medicine at the University of Pennsylvania conducted a study to see how calorie labels on online food ordering systems, like Grubhub or Seamless, influence customers' purchasing decisions. The study, which was recently published in the Journal of Public Policy & Marketing, showed that images of stoplights helped consumers watch their calorie intake—about as effectively as displaying calorie numbers.
Over a period of six weeks, 249 participants from Carnegie Mellon University were asked to order lunch from a new online portal, resulting in 803 orders placed overall. On this portal, all meal choices were presented with calorie information right next to the menu items, either as numeric listings or as a color-coded system of traffic light icons, where a red light means high calorie, a yellow light means medium calorie, and a green light means low calorie.
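The labeling scheme itself reduces to a simple mapping from calorie counts to colors. The paper's exact cutoffs aren't given in this article, so the 400- and 700-calorie thresholds below are illustrative assumptions, not the ones used in the Penn study:

```python
# Map a menu item's calorie count to a traffic-light label.
# The 400/700-calorie cutoffs are illustrative assumptions only.

def traffic_light(calories, low=400, high=700):
    """Return 'green', 'yellow', or 'red' for a calorie count."""
    if calories < low:
        return "green"
    if calories < high:
        return "yellow"
    return "red"

print(traffic_light(350))   # green
print(traffic_light(550))   # yellow
print(traffic_light(950))   # red
```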
What the researchers discovered was that presenting a menu item's nutritional information, whether as a numeric value or a symbolic icon, reduced the number of calories in the average order by about 10%, compared with presenting no information at all.
Although Eric M. VanEpps, the lead author of the paper, warns that "[future] studies looking at different menu types and sets of participants are necessary," he argues in the paper that the study "provides clear evidence that both calorie labeling methods can be effective when ordering meals online."
Not that food delivery companies have a choice in the matter. Starting in May 2017, the U.S. Food and Drug Administration will start requiring all restaurants, theaters, vending machines, and food delivery services to clearly label how many calories are in each item in an attempt to cut down on the alarming rise of obesity in the country. But perhaps numeric counters aren't necessary. As VanEpps's research suggests, familiar symbols can be just as effective.
Of the dozen or so unrealized projects film auteur Stanley Kubrick left behind at his death, none is as legendary as Napoleon—a historical opus about the French emperor that the director originally meant as his follow-up to 2001.
With Jack Nicholson attached to star as Napoleon and Audrey Hepburn as Josephine, Kubrick had already secured 50,000 real-life Romanian soldiers to help film the movie's extensive battle scenes when the financing unceremoniously dried up. Kubrick ultimately recycled much of his Napoleon research to make 1975's underrated Barry Lyndon a reality, but what if he hadn't? What if Napoleon had actually been made, along with other famously unmade blockbusters?
Los Angeles-based artist Fernando Reza imagines just that with his series of alt-reality movie posters, The Ones That Got Away. The series includes some truly legendary unmade films—here are the stories behind them.
Alfred Hitchcock's Kaleidoscope was an unmade prequel to Shadow of a Doubt, which Hitchcock considered the finest movie he'd ever made. The film would have followed a "handsome and charming" young bodybuilder, inspired by the real-life convicted rapist and murderer Neville Heath, as its protagonist. Universal ultimately turned the film down because of its relentless sex and violence, but many of the ideas in Kaleidoscope were eventually used in Hitchcock's 1972 film Frenzy, another underappreciated masterpiece.
Ronnie Rocket, David Lynch's unrealized follow-up to Eraserhead, was to star Michael J. Anderson (who went on to be Twin Peaks' The Man from Another Place) as a teenage dwarf who becomes the titular rockstar. Another subplot was to involve a detective who is so good at standing on one leg that he is capable of traversing the second dimension. Ultimately, no one wanted to finance Ronnie Rocket, so Lynch ended up making The Elephant Man instead.
This lost Dada comedy film was written by Salvador Dalí for the Marx Brothers in 1937. It was to include scenes showing giraffes wearing gas masks that had been set on fire, and Harpo running around as an aristocrat named Jimmy, using a butterfly net to capture the 18 smallest dwarfs in the world. The film was to be scored by legendary composer Cole Porter. MGM ultimately passed on the film, saying it was too surreal, even for the Marx Brothers.
At the Mountains of Madness was to be an adaptation of one of H.P. Lovecraft's weirdest pulp stories, in which unspeakable "Elder Things" are worshipped by a race of blind, six-foot-tall albino penguins in a lost Antarctic city. It was to be realized by monster-maker Guillermo del Toro of Hellboy and Pan's Labyrinth fame, until the film's similarity to Ridley Scott's Prometheus killed it off.
Other posters in the series include Orson Welles's Don Quixote and Heart of Darkness adaptations, as well as David Lean's adaptation of the Joseph Conrad novel Nostromo and Stanley Kubrick's lost Holocaust film, The Aryan Papers. And Reza tells me he hopes to add more over time, including Evolution, Ray Harryhausen's stop-motion film about Darwinism, and Tim Burton's infamous unmade adaptation of Superman.
Ultimately, Reza says, we're all so fascinated by unmade Hollywood films because they can be anything we want them to be. "Just about every director has some passion project that a lot of times never comes to fruition," he tells me. "Because they don't exist, we're left to fill in the blanks. So that kind of makes them perfect films. We can project our imagination onto them."
Next-gen virtual reality headsets like the Oculus Rift and HTC Vive are still hobbled by their dependence upon decidedly last-gen controllers, like the Xbox One gamepad. At this year's E3 expo, Oculus unveiled its own custom VR controllers, which will finally allow console cowboys to reach out and touch the virtual worlds in which they are immersed.
The controllers, called Oculus Touch, are something between an Xbox One controller split in half and the Nintendo Wiimote. By design, they resemble a rapier-style hilt with buttons, as well as thumb and forefinger triggers. Each controller also has built-in gyroscopes, allowing the Oculus Rift to detect its motion and translate it into in-game actions.
The video of the Oculus Touch controllers in action is doubtlessly hyperbolic, but if you look past the marketing glitz, you can get an idea of how they work. Using the Oculus Touch controllers, an Oculus player could shoot virtual hoops, play frisbee, manipulate in-game objects, or even pick up guns (of course).
The controllers should allow Oculus Rift owners to use a combination of gestures and button presses to simulate at least some of the full range of movements you can accomplish with your hands. They look like a sort of half-compromise between the traditional console controllers existing VR headsets use, and the fully articulated VR gloves that Snow Crash promised us.
The question remains whether these controllers will really catch on. They seem optional, only used for certain types of motion-controlled games, and in the history of gaming, optional proprietary controllers tend not to take off. Just ask Microsoft about the Kinect, or Sony about the PlayStation Move.
Still, if anything's a natural fit for VR, it's motion control. Virtual fingers crossed.
No one knows for sure how the megaliths of Stonehenge were erected 5,000 years ago. With each stone weighing between two and fifty tons, how were these massive weights transported from quarries as much as 200 miles away by neolithic builders—who hadn't even invented wheels or pulleys yet?
This is the kind of question that fascinates Brandon Clifford of Matter Design. Last year, he and a team of MIT students created the McKnelly Megalith, a 2,000-pound sculpture that replicated the physics by which the people of Easter Island moved massive stones across land, centuries ago, to create the island's legendary moai. The sculpture, for all its mass, could be moved with a fingertip—thanks to engineering ideas dating to around 1100 AD.
This year, Clifford created a new kind of megalith, based on equally ancient construction techniques. Called the Buoy Stone, the 1,850-pound sculpture was inspired by theories that the megaliths of Stonehenge were transported from their quarries down England's River Avon. Designed to mark the centennial of what MIT calls the "crossing of the Charles," when the school moved from Boston's Back Bay to its current Cambridge campus, the Buoy Stone floats.
The megaliths at Stonehenge are believed to have been floated down the River Avon with inflated animal bladders, so the Buoy Stone uses a similar technique to keep its mass bobbing in the Charles River. Although it appears to be solid concrete from the outside, the Buoy Stone contains a foam core surrounding a carefully calibrated internal bladder, which can accommodate up to 6,000 pounds of river water. That's enough to float the stone's nearly 2,000-pound weight indefinitely.
What makes the Buoy Stone such a striking sight isn't just the fact that it floats, but how it floats. Although the whole stone is eight feet wide and 20 feet long, it doesn't rest in the water along its longest axis, as you might expect. Instead, it bobs upright in the Charles, standing over 16 feet tall from the water line to the top of the stone. To accomplish this effect, Matter Design needed to employ precise computational simulations of the finished megalith's buoyancy, along with some of the same techniques that shipmakers use to float tall-masted sailboats upright in the water.
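The core of any such simulation is Archimedes' principle: the stone floats when its submerged portion displaces river water equal to its total weight. Here is a back-of-the-envelope sketch using the weights reported above; the freshwater density constant and the code itself are my own illustration, not Matter Design's model:

```python
# Back-of-the-envelope Archimedes check for a floating megalith.
# Weights come from the article; everything else is a rough assumption.

WATER_DENSITY_LB_FT3 = 62.4  # fresh water, pounds per cubic foot

def draft_volume(total_weight_lb):
    """Cubic feet of water that must be displaced to float a given weight."""
    return total_weight_lb / WATER_DENSITY_LB_FT3

stone_lb = 1850    # the Buoy Stone's dry weight
ballast_lb = 6000  # river water the internal bladder can take on

print(draft_volume(stone_lb))               # ~29.6 cubic feet, bare stone
print(draft_volume(stone_lb + ballast_lb))  # ~125.8 cubic feet, fully ballasted
```

Whether the stone then rests on its side or stands upright depends on where that weight sits: like a sailboat's keel, low-slung ballast pulls the center of mass below the center of buoyancy, which is the stability problem Matter Design's simulations had to solve.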
Although it doesn't use the exact same techniques that some archeologists believe were used at Stonehenge, Clifford says that the Buoy Stone is meant to make people think of the performative, almost theatrical aspects of making enormous objects move. In part, the Buoy Stone recreates the bizarre spectacle of what it must have been like to see fifty-ton stones floating down a prehistoric English river. "It's all about the mystique," Clifford says. "The reason we know this is a success is because as we were floating the Buoy Stone, so many joggers and cyclists kept on stopping to gawk at it, and ask how it was possible for a stone to float that way."
Right now, the Buoy Stone is bobbing placidly near the Massachusetts Avenue bridge in Cambridge, right across from MIT's famous Killian Court. And unlike the McKnelly Megalith, Clifford tells me, there's no intention of taking it down. "It's possible some authority will come along and tell us to remove it," he says, "but I'd love to see what it looks like out there in winter, when the river's been frozen over."
More than just being a spectacle, though—and like the megalith that preceded it—the Buoy Stone is an attempt to examine how knowledge from the past can influence design today. "If there's anything that megalithic builders were knowledgeable about, it's how to work with massive, heavy stones in practical ways," says Clifford.
Thanks to the ever shrinking gap between what we can simulate and what we can build in the real world, Clifford thinks the time of the megalith may be coming again.
Co.Design has partnered with the Brooklyn design studio Hyperakt to bring you Lunch Talks, a video series of conversations with smart, creative people.—Eds
Justin Gignac is the cofounder of Working Not Working, an invite-only network of the designers and creatives used by companies like Apple, Google, Airbnb, Facebook, Kickstarter, and Etsy. According to Gignac, his success as a design creative can be distilled down to one word: shamelessness.
In this recent Lunch Talk, Gignac reveals some of the shameless ideas he's explored in the past, and the lessons he learned from them.
For instance: He sold authentic New York City garbage online—and people took this tongue-in-cheek product a lot more seriously when he raised the price from $10 to $50. "This is, in essence, what luxury brands do, because it works," he says.
Another shameless idea that paid off for Gignac? Selling hand-painted prints of things he wanted, priced at the full cost of the thing itself: for example, a painting of a Roomba for $349.99. He'd then go buy himself a Roomba when he sold the painting. Although this idea seems self-serving, it ended up being adapted for Unicef's Good Shirts campaign, in which a T-shirt with a drawing of a dead mosquito or a needle would be sold to pay for malaria medication or vaccinations in the developing world. The campaign eventually raised $400,000.
Even Working Not Working is an idea Gignac chalks up to shamelessness. Originally, he had a fake neon animated GIF on his website, which would switch between "Working" and "Not Working" depending on whether client work was taking up his time. Whenever he switched the sign, prospective clients would hear about it through an iPhone push notification, text message, email, or Facebook status; because of this, Gignac never went more than a few hours without work after he switched his sign to "Not Working." Eventually, he figured if it worked for him, it could work for others. In the design world, a little shamelessness goes a long way.
Everyone has heard of Firefox, the world's second most popular web browser. Fewer people have heard of Mozilla, the nonprofit organization behind Firefox, and even fewer could tell you what makes Mozilla different from other software companies. It's a problem that Mozilla's creative director Tim Murray chalks up to a simple fact: Firefox has a distinctive brand communicated through a coherent visual identity. Mozilla does not.
"Our existing brand is insufficient for modern communications," Murray says bluntly. "It's just a wordmark, and a handful of muted colors. No logo, no social media favicon, no tagline, no custom font. Those are all aspects of a visual identity we're lacking." The fact that Mozilla doesn't really have these things is part of the reason, he says, that most people don't know what Mozilla's mission is: to promote freedom, transparency, and collaboration on the Internet through open-source software.
So Mozilla is getting a new visual identity. We just can't tell you what it's going to look or feel like yet. Neither can Mozilla. That's because it's going about its rebrand in the most Mozilla way possible—by opening it up to the community and getting people involved before the brief has even been written.
The typical brand identity refresh, Murray notes, happens almost entirely behind closed doors. You start out with strategy and positioning, do some brand personality work, hire an outside branding agency, talk to experts, go off into a room for a few months, and at the end of it all, you have your new identity. This system for doing a brand refresh is as old as the hills, but as controversies like the Met logo redesign prove, the closed-doors approach has become wide open to Internet backlash.
So when Mozilla started considering how to refresh its visual identity, it had two goals. The major one was to conduct the rebrand in a way that honored Mozilla's commitment to transparency and community collaboration. But just as important was avoiding, if possible, the backlash that comes with the announcement of pretty much any redesign these days. The answer to both problems, Murray says, was to open up the design process to everyone as early as possible.
Mozilla isn't crowdsourcing its new visual identity, though. Not really. Instead, it's trying to coordinate community participation and feedback with the design work of an outside agency, Johnson Banks. This process will start on Wednesday, when Mozilla will hold an event with 1,200 members of the Mozilla community, and ask them to give their feedback on seven conceptual directions, which will ultimately guide the rebrand. "For example, do members of the Mozilla community see Mozilla as more of a rebellious band of techno freedom fighters?" Murray asks. "Or is it more like the United Nations to them?"
After the direction of the new identity has been decided, Mozilla intends to spend all of July concepting and presenting different options to the community for feedback. Eventually, Mozilla will refine those concepts into a smaller library of options, and by the fall, the nonprofit hopes to have landed on a brand identity system that is open and flexible, and that the community has helped inform at every step of the process.
Murray says that Mozilla is open to incorporating design elements from the community into the new visual identity, as long as they are made in the right spirit. "We have 30,000 volunteer contributors who contribute their time to Firefox and other Mozilla products, offering their service for the good of the Internet," he says. "We're hoping any designer who wants to contribute their own design ideas to Mozilla will look at it through the same lens."
Murray admits that he doesn't really know how this open-door rebrand process is going to work. "We have no idea how many people will be interested, but we still think it's worth seeing what happens when we throw the doors open," he says. "It's super exciting, but as a brand guy? To me, it's also really terrifying."
Contributors who want to take part in Mozilla's design experiment can do so here.
The greatest period of construction in American history was also marked by one of the greatest periods of graphic design. I'm speaking, of course, of the work that the Works Progress Administration did between 1935 and 1943, not only in expanding public infrastructure in America, but also in designing kickass posters that got America excited about it. If we're ever going to go to space and colonize other planets, that's the kind of enthusiasm we're going to need again.
No wonder the U.S. government's most design-savvy branch, NASA, draws so much inspiration from the Works Progress Administration's rich graphic history. Following the agency's space tourism posters, NASA has just released Mars Explorers Wanted—a collection of WPA-inspired posters meant to get people excited about colonizing Mars, not just visiting it.
The posters each portray NASA astronauts on the surface of the Red Planet performing a number of different heroic activities. Rappelling down cliffs! Exploring ancient Martian canyons! Vacuum welding a spaceship! Even growing space tomatoes! Each poster is also accompanied by a slogan that seems straight out of a copywriter's handbook in Total Recall, like "Teach on Mars!" or "Mars Needs You!"
The posters were originally commissioned for the Kennedy Space Center in 2009. However, since NASA is a public agency, all of its artwork eventually gets released to taxpayers—so you can now download all of the posters in ultra-high resolution for free here.
Since 2008, diary-loving Jonny Naismith has been design director of Moving Brands' New York office, helping countless clients create or invigorate their brands.
Here, Naismith discusses how his client work with companies has increasingly evolved away from what we traditionally think of as "brands." These days, he argues, a brand is more than just a logo, but the way a company moves and reacts to its customers. As case studies, Naismith presents his work with the telecom Swisscom, for whom Moving Brands created a constantly evolving logomark driven by code, and BBC's Newsbeat, which uses an animation system to emphasize a news source that is constantly "beating" like a stereo's graphic equalizer.
Right now, there are 1,851 emojis supported by the Unicode Consortium, including everything from a purple eggplant to a ghost with its tongue sticking out. That's roughly 10 times as many symbols as you can type on an English QWERTY keyboard, and that number's only growing.
But despite the runaway popularity of these curious cartoon symbols, the way we actually type out emojis is very primitive: a tiny separate keyboard on our smartphones, roughly organized by category, that even the best emoji users haven't so much mastered as partially memorized. "Emoji has a big UI problem," says Xavier Snelgrove. It's a problem that his company, Whirlscape, is trying to solve with artificial intelligence. The company has created an Android app called Dango that uses recurrent neural networks to automatically predict what emojis you want to use based on your message.
On Android, Dango exists as a virtual on-screen helper—like Clippy for emoji. Resembling an anthropomorphic cube of pink Turkish delight, Dango suggests emojis in a word balloon as you type, and you can drop them into your message just by tapping on them. It works in any app, and with any keyboard, at least on Android. (When the iOS version of Dango ships later this year, it will need to exist as its own keyboard app, because of Apple's more stringent developer policies.)
But more impressive than how it appears on-screen is what Dango's doing behind the scenes. Snelgrove says that over the last two years, Whirlscape has been training a neural network on a massive archive of scraped Twitter and Instagram data to "understand" the language of emojis, as it is used online. And that language is a lot less obvious than it might at first appear.
Although every emoji technically has a description attached to it, these descriptions don't always correspond to the way they're used in the real world. A purple eggplant, for example, is usually used as emoji shorthand for a penis—not a polite dinner suggestion. Dango understands this distinction well, and offers similarly knowing suggestions, such as a kitty face or a peach, for other unmentionables. Mention drugs, meanwhile, and Dango will suggest the plug emoji, slang for a big-time dealer.
Nor is Dango's emoji savviness limited to the profane or criminal. Type Kanye, and Dango will suggest a flame emoji, because Kanye's latest track is on fire. It will then suggest a goat, because Kanye fans use this as shorthand for him being the greatest of all time: G.O.A.T. Next, a pair of earbuds with musical notes coming out of them, the meaning of which is obvious. And finally, the praising hands, because all hail Yeezus. Dango treats mentions of Beyoncé the same way, suggesting a crown and a bumblebee for the Queen B.
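Under the hood, the task is mapping a message to a ranked list of emojis. Dango's real model is a recurrent neural network trained on scraped social media; the sketch below is only a toy stand-in, using hand-picked keyword associations drawn from the Kanye and Beyoncé examples above, to illustrate the input-output shape of such a suggester:

```python
# Toy emoji suggester illustrating Dango's interface, NOT its model.
# The real app learns associations from millions of posts with a
# recurrent neural network; this stand-in hardcodes a few of them.

ASSOCIATIONS = {
    "kanye": ["🔥", "🐐", "🎧", "🙌"],  # usage-based, not literal, meanings
    "beyonce": ["👑", "🐝"],
    "pizza": ["🍕"],
}

def suggest_emojis(message, top_n=3):
    """Return up to top_n emoji suggestions for a message."""
    suggestions = []
    for word in message.lower().split():
        for emoji in ASSOCIATIONS.get(word, []):
            if emoji not in suggestions:
                suggestions.append(emoji)
    return suggestions[:top_n]

print(suggest_emojis("kanye dropped a new track"))  # ['🔥', '🐐', '🎧']
```

The hard part, of course, is what this sketch skips: learning those associations automatically, so that the eggplant, the goat, and the plug all mean what people actually mean by them.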
Dango is so good at emojis, it can make even a clueless 37-year-old like me seem as "with it" as any millennial Snapchatter when he's texting, provided he resists the urge to type "with it" in quotes.
It's this ability of AI to enrich the ways in which we communicate, and even bridge cultural or linguistic divides, which really excites Snelgrove. Today, an AI like Dango might only help us communicate with each other in emojis. But in the future, it could work as a slang translator, a writing coach, or even an in-app Cyrano de Bergerac helping you craft the perfect Tinder profile. "It's absurd," says Snelgrove, to think that the future of messaging will ever just be chatbots messaging chatbots, because people will always want and need to talk to one another. But that doesn't mean we won't ask neural networks like Dango to be our wingmen. Or, rather, our wingbots.
You can try Dango online without downloading anything.
After nearly fifty years of planning, 81-year-old Bulgarian-American artist Christo has finally realized The Floating Piers, a two-mile walkway of undulating gold that will allow visitors to step, like Jesus, across the surface of the water to two small islands on Italy's Lake Iseo.
Although The Floating Piers resemble bolts of saffron fabric unrolled across the surface of the Lombardy region lake and wrapped around its two small islands, they are quite safe to walk upon.
Underneath the fabric are 220,000 separate polyethylene cubes, which are anchored to the bottom of Lake Iseo by more than 190 concrete slabs, almost like buoys, so that they are safe to walk across even as they bob and sway. The artist himself has described the sensation as being like "walking on the back of a whale."
Attempts to get The Floating Piers made began 46 years ago, with South America's Río de la Plata as a suggested site. That plan fell through, as well as an attempt to bring The Floating Piers to Tokyo Bay. But once Lake Iseo was identified as a site for the project, it took relatively little time to create: just 22 months, during which a team of engineers, French divers, and even a group of Bulgarian athletes—working as literal manpower in an eccentric touch by Christo—helped construct the piece. The finishing touch was the fabric covering, which has been designed to change color depending on weather conditions, temperature, and moisture.
Over the next sixteen days, as many as 40,000 visitors are expected to walk The Floating Piers. They will start in the town of Sulzano, walk down to the shore, and head out across the surface of Lake Iseo to the islet of Peschiera Maraglio; from there, the piers stretch out from two different points to circle the tiny island of San Paolo. A team of lifeguards will be on hand at all times to make sure no one slips into the lake by accident. Admission is free, with the estimated $17 million cost of the piers' construction funded by the sale of Christo's original drawings and sketches.
The ephemerality of the experience is part of Christo's point, so The Floating Piers will disappear on July 3, just 16 days after its grand unveiling. "The important part of this project is the temporary part, the nomadic quality," Christo told The New York Times. "The work needs to be gone, because I do not own the work, no one does. This is why it is free."
All Photos (unless otherwise noted): Wolfgang Volz
With the Great Barrier Reef suffering the worst mass bleaching event in history, climate change could kill off the world's coral reefs for good by the end of the century. When that happens, Courtney Mattison's coral reef art might be the closest thing to the reefs we have left.
Called Our Changing Seas, Mattison's series of massive, intricately detailed ceramic sculptures was created by hand to represent coral reefs in the midst of being bleached. Bleaching is what happens to reefs when their sensitive zooxanthellae—the symbiotic algae that give coral its pigmentation—die, usually due to environmental factors like pollution or temperature. And when the zooxanthellae die, so do the reefs.
To recreate these reefs in ceramic, Mattison pokes thousands of holes into the clay with her fingers to mimic the sponge-like cavities of a coral colony, while sculpting coral's more tubular polyps with the aid of simple tools like paintbrushes and chopsticks. Each of her sculptures takes between seven and ten months to create in her Denver studio. There, they are sculpted and fired in as many as 100 separate pieces, which combined will make up the finished reefs, weighing 900 to 1,500 pounds each.
Even the glaze Mattison uses has something in common with coral. Calcium carbonate is a common ingredient in both coral reefs and clay and glaze materials. "Not only does the chemical structure of my work parallel that of a natural reef," the artist writes, "but brittle ceramic anemone tentacles and coral branches break easily if improperly handled, similar to the delicate bodies of living reef organisms."
Mattison hopes that her Our Changing Seas series will help inspire people to save the coral reefs before they're just as inanimate as her sculptures. "There is still time for corals to recover even from the point of bleaching if we act quickly to decrease the threats we impose," she says. "Perhaps if my work can influence viewers to appreciate the fragile beauty of our endangered coral reef ecosystems, we will act more wholeheartedly to help them recover and even thrive."
All three of Mattison's Our Changing Seas sculptures are currently on display. The latest can be seen at an exhibition at the Palo Alto Art Center, running from June 18 to August 28.
Photos: Arthur Evans for the Tang Museum via Behance
Armin Vit is the cofounder of Under Consideration LLC, which is technically a graphic design firm. But in 2015, Under Consideration LLC earned a meager $17,000 from client work: a number that Vit says is what most design firms charge for taking a single meeting.
Instead of client work, Under Consideration is known primarily as an influential network of design blogs and websites. The network specializes in both praising and roasting the latest design projects and brand identities from companies around the globe, as well as throwing the annual Brand New conference, a two-day event dedicated to corporate and brand identity.
So how do you make your money as a designer without client work?
It's as a blog and event organizer that Under Consideration makes most of its money. In this video, Vit lays out his company's finances, giving a rare view into what a designer-turned-critic can hope to earn at the top of his game.
In 2015 alone, Vit says, Under Consideration made $535,000, with the Brand New conference accounting for over $312,000 of that. And while that may seem impressive for a husband-and-wife company, at the end of the day, after all salaries and expenses have been paid, Under Consideration LLC is worth only around $28,000, according to Vit.
So why put this information out there? After all, "Designers don't talk about money at all. You just don't go there," says Vit. But that's at the expense of the entire field. "And how are we supposed to know if we're valuing our work correctly if we have nothing to compare it against?"
Early in the morning on June 12, just a few hours after 29-year-old Omar Mateen stood under the throbbing houselights of the Pulse nightclub and first began firing a semi-automatic rifle into the crowd, the shaken newsroom of the Tampa Bay Times gathered together to decide how to cover what had already cemented itself as the deadliest mass shooting in United States history.
"We knew we wanted to use technology to help people visualize what had happened without sensationalizing the tragedy," says Adam Playford, director of data and digital enterprise at the Times. But how do you interactively visualize the blood and bedlam of an event like the 2016 Orlando nightclub shooting, without turning it into something as crass as a video game?
The Times' answer to this question is Choice and Chance, a powerful exploration of the Pulse shootings. By mapping eyewitness accounts of the massacre to a 3-D recreation of the Pulse nightclub, the Tampa Bay Times has succeeded in not just visualizing the night's horrors, but also showing the inadvertent role architecture played in determining who lived and who died.
Bullet by bullet, the visualization takes you through the Pulse nightclub on the night of the rampage. You can see exactly where Mateen was standing in relation to the dance floor and the bar when he first walked into Pulse and opened fire on the crowds with his SIG Sauer MCX semi-automatic. Those who were standing near the back of the club when Mateen walked in were the most likely to survive. The visualization also explores the many hiding spots Pulse patrons took to when the shooting started, and even lets you peek out from behind these doors through the eyes of survivors, to see what they could see from their vantage point.
The execution of something like this could be pretty ghastly, but the Choice and Chance interactive works so well because it only recreates the architecture of the massacre. Instead of giving the murderer and his victims digital avatars, their actions are represented only by the accompanying text and the color of the club's houselights, with each individual's narrative coded to a different hue. There's not even any sound. The effect is eerie, like a walk through an empty house still haunted by those who died inside.
A lot of the design decisions that help make Choice and Chance so effective were more serendipity than genius, according to Eli Murray, who came up with the idea of the visualization and also programmed it. "We only had a week to make this," he says, so the choice to leave out people from the recreation was just as much a technical limitation as it was a conscious artistic choice. The last thing they wanted to do, Murray says, was position their recreation of the Pulse nightclub in the middle of the Uncanny Valley.
Other design decisions had nothing to do with technology. For example, the Tampa Bay Times opted not to include sound in the interactive, because it was felt the addition of music, screams, and gunshots didn't add anything valuable to the story besides a degree of realism that could be considered ghoulish. "We were very conscious as we were making this that we were walking a fine line between creating something that felt stale and ending up with something that felt like a video game," Murray says.
That's a balance the Tampa Bay Times seems to have found. Although the Times wouldn't share traffic numbers, the interactive has been well received by respected infographic makers like Nathan Yau, and feels like it could be shortlisted for a visualization of the year award. These accolades are humbling, says Murray, because no one at the Tampa Bay Times had ever set out to create something like this before.
But the Times doesn't think every story should be told this way. "I think this was the right approach [to tell the story of the Orlando nightclub shootings] because it was a very situational event with a lot of confusion, where the police and FBI weren't giving out a lot of information," Murray says. "But I don't think this will be our go-to approach for every big event."
Let's pray not. If anything positive can come out of the killings at Pulse, let's hope it's a world where visualizations like Choice and Chance never need to be made again.
Consisting of four bold, balloon-like caps smeared together to look like they are blurring in and out of focus, the Tate logo designed by Wolff Olins in 2000 might be one of the most recognizable identities in the museum world. But for those working behind the scenes, the identity was maddeningly inconsistent at best and a nightmare to work with at worst.
"The Tate is all about being edgy, but the logo had lost the edge we'd always admired in it," says Stephen Gilmore, partner at North, the design agency brought in to rework the Tate's identity. Nonetheless, after careful consideration, North opted not to throw out the classic Wolff Olins identity but instead give it an overhaul. In doing so, they adapted it to the digital requirements of today, while also making it simpler and less schizophrenic.
The Tate logo was never just a single logo. Instead, any designer working on a project for the Tate previously had to choose among 75 slightly different variations of the logo to work with. Worse, there was no guidance on how these logos should be used: Any one of these 75 logos could go anywhere and be any size. Add in a custom typeface that came with half a dozen weights, ranging from super bold to super thin, and you have an overarching identity with very little consistency. "There were so many possibilities that it just paralyzed the designer, who had to pretty much approach every project by starting from scratch each time," says Gilmore. Consequently, "no two pieces of Tate [branding] ever felt like part of the same organization."
Under North, the new Tate identity is much simpler. The original Wolff Olins logo consisted of more than 3,000 separate dots, but North reduced this to just 300. The resulting logo retains the feel of the Tate's blur but is easier to reproduce at both small and large sizes, and lends itself well to digital animation. It also eliminates a big problem with the original Wolff Olins logo: depending on how it was reproduced, such as when it was embroidered or printed on a T-shirt, it tended to create a disorienting moiré effect.
Another advantage of the new Tate logo is that it doesn't need to be all that legible to work. A small chunk of the old Tate logo just looked like a smudge, but the bigger bubbles in the new logo make it instantly recognizable as a pattern, even if you can't read the whole wordmark.
Outside of the refreshed logo, much of North's work was spent determining the visual language of the identity and simplifying how it should be used. For example, the Tate custom typeface, Tate Pro, has been simplified to only have two weights, strong and thin, each of which is used in specific circumstances—strong for identifying individual Tate museums, and thin for supplementary text.
Previously, the four museums that make up the Tate—Tate Britain, Tate Modern, Tate Liverpool, and Tate St Ives—all had their own variations of the Tate logo, with their location sloppily added on top. "Emotionally, it didn't feel appropriate," Gilmore says. In the new identity, the Tate logo always stands by itself in marketing materials, such as posters, to establish the overarching identity of the brand. The individual museums, meanwhile, are clearly identified in a set location in the bolded Tate Pro typeface.
Like the original Wolff Olins Tate branding, which was unveiled alongside the Tate Modern in 2000, North's refreshed identity has been rolling out since January, to prepare for the launch of the new Tate Modern building.
But why didn't North just throw the old Tate identity away and start fresh if it had so many problems? It wasn't just because it was a design classic. In fact, North and Tate explored a number of alternative identities together. But ultimately, they decided to rescue the old logo from itself. "We looked at the equity that Tate already has, and the international recognition of the old logo, and asked ourselves: Why would we throw that away?" Gilmore says. "We felt that actually it was braver and more exciting to work with the existing identity and take it further forward."
Co.Design has partnered with the Brooklyn design studio Hyperakt to bring you Lunch Talks, a video series of conversations with smart, creative people.—Eds.
In 2010, designers Brian Buirge and Jason Bacher launched Good Fucking Design Advice, a website that provided expletive-laced aphorisms like "Trust your fucking process" and "Work all fucking night." It was meant as a gag, but within three days, the site had gone viral.
Today, Good Fucking Design Advice is a merchandising empire, selling posters, T-shirts, stickers, and more, all dedicated to inspiring designers—or, at least, getting them to stop dicking around. Good Fucking Design Advice has become such a merchandising powerhouse that even Apple's Jony Ive has one of their posters hanging on his wall.
In this video, Buirge and Bacher reveal how they turned their profane little website into a brand in its own right. They walk through their initial missteps, like putting nonexistent products up for sale that they didn't actually know how to make, and share some key advice for designers who want to pursue their careers without ever getting too complacent or comfortable.
Gaseous orange gonad and presumptive GOP presidential nominee Donald Trump has a reputation for being a man designers hate to work for, which goes a long way toward explaining those hats. Meanwhile, his daughter, Ivanka Trump, is making headlines in the design world in a different way: by allegedly ripping off hideous footwear designs from another company.
USA Today reports that Italian luxury footwear maker Aquazzura is suing Ivanka Trump's fashion and footwear line for stealing the design of the Wild Thing, an inexplicably popular $785 sandal worn by the likes of Solange Knowles and Olivia Palermo. Trump's shoe, meanwhile, is called the Hettie, and undersells the Wild Thing by almost $650.
The profile of both shoes is spaghetti-strand minimalist, suggesting a Twizzler red G-string that has just been picked out of someone's buttcrack. Same for the tassels near the heel strap and toe, which look like glued-on pasties.
The Hettie isn't the only Aquazzura shoe Trump has allegedly ripped off, either. Aquazzura also claims that its Forever Marilyn and Belgravia shoes have been copied by Ivanka Trump's footwear collection. The company wants an injunction preventing Trump from selling the shoes, as well as a cut of the profits, damages, and legal fees.
Trump's attorneys claim the charges are groundless. "This is a baseless lawsuit aimed at generating publicity," USA Today quotes Matthew Burris, chief financial officer of Marc Fisher and partner to the Ivanka Trump brand, as saying. "The shoe in question is representative of a trending fashion style, is not subject to intellectual property law protection, and there are similar styles made by several major brands. The lawsuit is without merit and we will vigorously defend ourselves against the claim."
In his famous Robot series of stories and novels, Isaac Asimov created the fictional Laws of Robotics, which read:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Although the laws are fictional, they have become extremely influential among roboticists trying to program robots to act ethically in the human world.
Now, Google has come along with its own set of, if not laws, then guidelines on how robots should act. In a new paper called "Concrete Problems in AI Safety," Google Brain—Google's deep learning AI division—lays out five problems that need to be solved if robots are going to be a day-to-day help to mankind, and gives suggestions on how to solve them. And it does so all through the lens of an imaginary cleaning robot.
Let's say, in the course of his robotic duties, your cleaning robot is tasked with moving a box from one side of the room to another. He picks up the box with his claw, then scoots in a straight line across the room, smashing over a priceless vase in the process. Sure, the robot moved the box, so it's technically accomplished its task . . . but you'd be hard-pressed to say this was the desired outcome.
A more deadly example might be a self-driving car that opted to take a shortcut through the food court of a shopping mall instead of going around. In both cases, the robot performed its task, but with extremely negative side effects. The point? Robots need to be programmed to care about more than just succeeding in their main tasks.
In the paper, Google Brain suggests that robots be programmed to understand broad categories of side effects, which will be similar across many families of robots. "For instance, both a painting robot and a cleaning robot probably want to avoid knocking over furniture, and even something very different, like a factory control robot, will likely want to avoid knocking over very similar objects," the researchers write.
In addition, Google Brain says that robots shouldn't be programmed to obsess single-mindedly over one goal, like moving a box. Instead, their AIs should be designed with a dynamic reward system, so that cleaning a room (for example) is worth just as many "points" as not messing it up further by, say, smashing a vase.
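A dynamic reward like the one just described can be pictured as a function that pays for task completion but charges for collateral damage. This is only an illustrative Python sketch; the names (`box_delivered`, `objects_disturbed`) and the penalty weight are hypothetical, not taken from the paper:

```python
# Illustrative sketch of a reward that balances task success against
# side effects. The names and weights here are hypothetical.

def reward(box_delivered: bool, objects_disturbed: int,
           side_effect_penalty: float = 5.0) -> float:
    """Pay for completing the task, but charge per object disturbed."""
    task_reward = 10.0 if box_delivered else 0.0
    return task_reward - side_effect_penalty * objects_disturbed

# Delivering the box while smashing a vase now scores worse than
# delivering it cleanly, so the robot has a reason to steer around it.
clean_run = reward(box_delivered=True, objects_disturbed=0)     # 10.0
smashed_vase = reward(box_delivered=True, objects_disturbed=1)  # 5.0
```

The penalty term stands in for the broad side-effect categories the researchers describe: any disturbance costs points, regardless of the task at hand.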
The problem with "rewarding" an AI for work is that, like humans, they might be tempted to cheat. Take our cleaning robot again, who is tasked to straighten up the living room. It might earn a certain number of points for every object it puts in its place, which, in turn, might incentivize the robot to actually start creating messes to clean, say, by putting items away in as destructive a manner as possible.
This is extremely common in robots, Google warns, so much so that it says this so-called reward hacking may be a "deep and general problem" of AIs. One possible solution is to base rewards on anticipated future states, instead of just what is happening now. For example, if you have a robot that is constantly destroying the living room to rack up cleaning points, you might instead reward the robot on the likelihood of the room being clean in a few hours' time if it continues what it is doing.
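The difference between a hackable per-action reward and one based on anticipated future states can be shown with two toy reward functions. This is a purely illustrative sketch; the names and the cleanliness prediction are assumptions, not Google Brain's formulation:

```python
# Illustrative contrast between a hackable reward and one based on
# an anticipated future state. All names here are hypothetical.

def naive_reward(items_put_away: int) -> float:
    """One point per cleanup action. Hackable: making new messes
    creates more items to put away, and therefore more reward."""
    return float(items_put_away)

def anticipated_reward(predicted_cleanliness: float) -> float:
    """Reward the predicted state of the room a few hours from now
    (0.0 = trashed, 1.0 = spotless). Creating messes lowers it."""
    return 10.0 * predicted_cleanliness

# Dumping a shelf to "clean it up" means more actions for the naive
# reward, but a worse predicted room state, so the hack no longer pays.
```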
Our robot is now cleaning the living room without destroying anything. But even so, the way the robot cleans might not be up to its owner's standards. Some people are Marie Kondos, while others are Oscar the Grouches. How do you program a robot to learn the right way to clean the room to its owner's specifications, without a human holding its hand each time?
Google Brain thinks the answer to this problem is something called "semi-supervised reinforcement learning." It would work something like this: After a human enters the room, the robot would ask whether the room was clean. Its reward would only trigger when the human indicated the room was to their satisfaction. If not, the robot might ask the human to tidy up the room while it watched what the human did.
Over time, the robot will not only be able to learn what its specific master means by "clean," it will figure out relatively simple ways of ensuring the job gets done—for example, learning that dirt on the floor means a room is messy, even if every object is neatly arranged, or that a forgotten candy wrapper stacked on a shelf is still pretty slobby.
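The semi-supervised setup described above might be sketched as a reward that only fires on episodes a human has actually judged; unlabeled episodes still provide experience, just no direct reward. The names below are hypothetical, a sketch rather than the paper's formulation:

```python
from typing import Optional

# Illustrative sketch of semi-supervised reinforcement learning:
# reward is only available on the rare episodes a human labels.

def episode_reward(human_says_clean: Optional[bool]) -> Optional[float]:
    """Return a reward only when a human has judged the room.
    None means the episode is unlabeled: the robot can still learn
    from the trajectory, but gets no direct reward signal."""
    if human_says_clean is None:
        return None
    return 10.0 if human_says_clean else 0.0
```

In a full system, the robot would use these sparse labels to train a model predicting the human's judgment, letting it estimate reward on the many episodes nobody grades.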
All robots need to be able to explore outside of their preprogrammed parameters to learn. But exploring is dangerous. For example, a cleaning robot that has realized a muddy floor means a messy room should probably try mopping it up. But that doesn't mean that when it notices dirt around an electrical socket, it should start spraying the socket with Windex.
There are a number of possible approaches to this problem, Google Brain says. One is a variation of supervised reinforcement learning, in which a robot only explores new behaviors in the presence of a human, who can stop the robot if it tries anything stupid. Setting up a play area for robots where they can safely learn is another option. For example, a cleaning robot might be told it can safely try anything when tidying the living room, but not the kitchen.
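Both approaches, human supervision and a sandboxed play area, can be combined in a simple gate on exploratory behavior. A minimal sketch, with hypothetical zone names:

```python
# Illustrative gate on exploration: novel behavior is allowed only in
# designated safe zones, or anywhere a supervising human is present.
# The zone names are hypothetical.

SAFE_EXPLORATION_ZONES = {"living_room"}

def may_explore(zone: str, human_present: bool) -> bool:
    """Allow the robot to try new behaviors only in sandboxed zones,
    or under supervision by a human who can stop it."""
    return zone in SAFE_EXPLORATION_ZONES or human_present

# The robot may experiment in the living room unsupervised, but only
# tries anything new in the kitchen when someone is watching.
```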
As Socrates once said, a wise man knows that he knows nothing. That holds doubly true for robots, who need to be programmed to recognize both their own limitations and their own ignorance. The penalty is disaster.
For example, "in the case of our cleaning robot, harsh cleaning materials that it has found useful in cleaning factory floors could cause a lot of harm if used to clean an office," the researchers write. "Or, an office might contain pets that the robot, never having seen before, attempts to wash with soap, leading to predictably bad results." All that said, a robot can't be totally paralyzed every time it doesn't understand what's happening. A robot can always ask a human when it encounters something unexpected, but that presumes it even knows what questions to ask, and that the decision it needs to make can be delayed.
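One common way to picture this kind of self-aware fallback is a confidence threshold: when the robot isn't sure what it's looking at, it defers to a human instead of acting. The labels, threshold, and action names below are all hypothetical:

```python
# Illustrative sketch: act only on confident recognitions; otherwise
# defer to a human. The threshold and names are hypothetical.

def choose_action(label: str, confidence: float,
                  threshold: float = 0.8) -> str:
    """Clean recognized things; ask for help with unfamiliar ones."""
    if confidence < threshold:
        return "ask_human"  # e.g. a never-before-seen pet
    return f"clean_{label}"
```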
Which is why this seems to be the trickiest problem to teach robots to solve. Programming artificial intelligence is one thing. But programming robots to be intelligent about their idiocy is another thing entirely.