If you've ever wondered how Transformers like Optimus Prime and Bumblebee would function on a cellular level, you may have your answer. A cross-disciplinary team at Harvard University just published a paper describing a new technique for creating transforming 3-D structures that can deform in dozens of ways—collapsing down to virtually nothing when not in use, but remaining incredibly strong when engaged.
To create this transformable "metamaterial," the Harvard team looked to snapology, a type of modular origami that uses folded paper ribbons that are snapped together to form 3-D shapes. Some of the more ball-like structures in snapology, like the icosahedron, are remarkably rigid, while others, like the extruded cube, can easily collapse down into nothing.
The Harvard team started with a single shape based on these ideas: a cross-shaped polyhedron with 24 faces and 36 edges. They embedded pneumatic actuators—essentially, air-powered hinges—in this basic building block to fold or unfold it. By pumping pressurized air through the actuators, the team at Harvard demonstrated how to deform the polyhedron along any of its hinges: It can collapse down to practically nothing, as well as fold, twist, and bend.
Where things got really interesting, though, was when the researchers linked these cells together. By aggregating many of these shapes into a single structure, they found that they could create programmable, transformable objects of practically any size. What's more, they say that these structural networks could use pretty much anything (including water, heat, solar power, and so on) as a power source—all they need are actuators in the right places.
While self-transforming structures could be useful across many fields, one place this could be immediately useful is space. For example, a collapsed shelter could be sent to Mars in a small payload ahead of a manned space mission. Once there, the rays of the sun would allow it to unfold itself, ready for use when the first astronauts touch down. But there are equally interesting uses here on Earth. Imagine an enclosed stadium that, without electricity, could open up its roof when it was sunny, a wall that could open up into a window, or a disaster relief shelter that could deploy itself. Heck, you could potentially even use Harvard's metamaterial to create an umbrella that automatically opens when it rains.
Another possible use case comes from medicine. Since this technology could be miniaturized down to the nanoscale, it's a great fit for embedded medical devices. In fact, the team suggests that one possible use would be in surgical stents: Getting a stent installed could then become as easy as getting a shot.
Slowly but surely, America is winning its war on smoking. In 1970, 37% of all Americans smoked cigarettes. Today, it's less than half of that. Every segment of the population is smoking less, except for one: the severely mentally ill. Of the 13.6 million Americans suffering severe mental illnesses including schizophrenia, major depression, and acute bipolar disorder, an astonishing 80% smoke. People with severe mental illnesses are believed to consume one out of every three cigarettes sold in this country.
Funded by the National Institute on Drug Abuse, Learn to Quit is the first mobile app specifically designed to help the severely mentally ill quit smoking. It was developed by Smashing Ideas, a Seattle-based digital design firm, and Roger Vilardaga, a professor in the University of Washington's department of psychiatry and behavioral sciences. Here's how they tweaked the design to suit the needs of their unique user base.
The goal of Learn to Quit is to help patients quit smoking using Acceptance and Commitment Therapy, a therapeutic technique that teaches acceptance and mindfulness through a series of exercises. Although the approach is often successful in treating addiction, anxiety, and depression, its concepts are largely abstract—which can make them particularly difficult for the severely mentally ill to grasp through words alone.
"In his research, [Vilardaga] had found that illustration and visuals were really important to getting patients to understand these abstract ideas," says Chad Otis, executive creative director at Smashing Ideas. "So he'd come up with about 300 Post-its worth of simple stick drawings to explain the concepts, which we worked to bring to life."
For Learn to Quit, Smashing Ideas worked with Vilardaga to literally visualize the exercises. In the app, a user controls a cute, cartoon-style avatar on a journey through a world of urges. As they navigate these urges, they complete a series of exercises about acceptance, mindfulness, health, and addiction. Use of explanatory text is kept to a minimum whenever possible; instead, animation and illustrations are used to visualize particular techniques.
The result is an app that almost feels like it's aimed at children—but which patients find friendly and easy to understand, even if they have trouble reading. It's also quite funny, with jokes sprinkled throughout. "Humor's something every patient we tested really reacted positively to," says Vilardaga.
In the world of mobile app design, swiping is ubiquitous. You swipe your iPhone to unlock it, to navigate screens, or to scroll up and down.
But swiping—especially precisely swiping on a UI element, like your iPhone's unlock slider—is actually a big challenge for patients with severe mental illnesses. "About half of the patients we see have tremors, often due to the medication they are on," explains Vilardaga. "This has a collateral effect on their fine motor skills, for things like using touch screens and keypads."
For Learn to Quit, the design team had to find ways to tailor the interface to these issues. For example, most of the interaction a player has with the game is done through simple tapping. The app offers up large balloon-like buttons, so when Learn to Quit asks a patient to enter some data—for example, how many cigarettes they've skipped—it's more forgiving about where they tap. But many screens don't even have buttons at all. You can tap anywhere to move forward.
You don't think about it, but most apps are like choose-your-own-adventure novels. When every possible screen you can go to is fully mapped out, the resulting diagram is incredibly complex. It's like navigating a maze. Whether we're aware of it or not, using any app requires us to build a mental model of where we are within it.
But those with mental illnesses tend to have a harder time creating complicated mental models. So for Learn to Quit, Smashing Ideas had to try to flatten the entire app's navigation structure. "We spent a lot of time simplifying wayfinding and backtracking. We also worked hard to simplify areas where there were too many subscreens, to minimize cognitive overload," Otis explains.
This strategy of minimizing overload extended all the way down to the colors used in the app's interface. "This is aimed at a pretty broad group of patients, some of whom can be overwhelmed by colors that are too flashy or bright," explains Vilardaga. So when it came time to pick a color palette, Smashing Ideas chose one that was nice and muted without being depressing, which ended up including lots of desaturated blues, creams, and oranges.
Plenty of motivational and self-help apps use gaming elements to incentivize users to keep going—but this was a particularly important aspect of the design process with Learn to Quit.
"A big thing our interviews showed going in was that they found the idea of winning points or playing games like Bingo incredibly exciting," Vilardaga says. "So we knew from the beginning that having some kind of gaming element in our app was going to be very important if it was going to be successful."
That's why Learn to Quit has tons of game aspects, such as achievements and stats. The app tracks your progress, rewards you with achievements over time, assigns you stars, and even keeps "score" of how much money you've saved from cutting down on smoking so far.
In many ways, designing for the mentally ill is about following the same best UI/UX practices as any other app, Otis told me. But there are some key differences—especially with regards to maximizing user friendliness and minimizing cognitive load—that Smashing Ideas needed to keep in mind to make an effective app.
Of course, how effective Learn to Quit actually is at helping patients stop smoking is still unknown. Yet unlike most self-help apps, its impact will be scientifically determined, and soon. Because it was funded by the National Institute on Drug Abuse, Learn to Quit will actually undergo clinical trials, starting next month and continuing through July, to see how effective it is at getting 90 patients to quit smoking.
Unfortunately, that also means the app won't be available to download by the general public until those trials are over. When Learn to Quit is eventually released, though, expect it to come to Google Play first.
Adaptive design, or design that specifically adapts to the needs of groups with special usability requirements, is an approach that many industries are finally starting to take seriously. That includes giants such as Microsoft and Google, but also smaller companies and fashion designers. With Learn to Quit, adaptive design finally has a counterpart in app design.
There have been whole books written about the visual language of comics. A curlicue above someone's head means he's dizzy. Straight lines shooting out of an object mean it's moving fast. A series of lines tracing increasingly smaller hills behind an object means it's bouncing. And so on.
Now, here comes the prolific Japanese design firm Nendo, applying that language to a bunch of... chairs? For the upcoming Milan design fair, Nendo created 50 chairs for Friedman Benda, each of which adapts one design element from manga, or Japanese comics, into an otherwise stock design.
So one of Nendo's chairs has stars shooting out of it, which in manga usually means a character is stunned. Another is surrounded by a swirling vortex, showing it's dizzy. Another has straight lines shooting up from it, making it look like it's falling, while its sister chairs have similar lines coming from the left and right, making it seem as if they are speeding. There's a wobbly-kneed chair, a jittery chair, and even a chair with a speech bubble coming out of it.
The chairs are otherwise as simple as they can possibly be in material, texture, and geometry. That was a deliberate decision, says Nendo. The firm notes that manga tends to be in black-and-white, an effect mimicked by their chairs' stainless steel construction. Even the way that each chair is made up of a series of squares and rectangles reflects the grid-like structure of a comic page.
Designed by Nendo founder Oki Sato, the 50 Manga Chairs will be on display at the Milan design fair from April 12 through April 17.
Parking is a nightmare in Islington, a London borough packed as densely with people as Chennai, India—the seventh most densely populated city on Earth. Sixteen different permits dictate when and where you can park. Rules are written on signs so poorly designed that you have to jump out of the car to read them up close. And then there's the Arsenal soccer team, whose stadium sits in the borough. When there's a match on, all the rules change, and parking in Islington moves from "nightmare" to "hell on earth."
So when they set out to build an app that could "fix" parking, Ford and Ustwo figured Islington was the place to start. The automaker and digital product agency teamed up to create GoPark, a new app that aims to solve the design problem of parking in Islington first, and then to extend that solution to other cities around the world.
The stakes are big. "If we can come up with a solution to London's parking problem, it has knock-on effects when it comes to air pollution, congestion, and road safety," says Tim Smith, a London-based designer at Ustwo. "It means less people getting fines, and traffic wardens being used more efficiently, and less administration overhead. I hate the term, but there's no other way to say it: It's literally a win-win for everyone."
The problem in Islington is two-fold. First of all, there are only so many spaces. According to Ford, up to 30% of all traffic on London roads at any given time is just people circling for a spot. Then there's the complexity of the parking rules themselves, which have more than 200 permutations dictating whether you can or cannot park in any given spot. Most of the people ticketed in Islington aren't really parking scofflaws. They just don't understand the rules. They park somewhere, thinking they're fine, and get a ticket anyway—which, in turn, leads to trust issues.
First and foremost, GoPark aims to make it easy for Londoners to know if they are allowed to park in a given spot in Islington. Using your phone's GPS, the GoPark app shows an interface with your car's location and any parking spots nearby. The app already knows what permits you have, so if a parking spot near you is green, it means you can park there. If it's red, you can't. For drivers who prefer to plan their parking ahead of time, there's a calendar view; the app will also tell you how long you can park at any given spot. But ultimately, the UI is simple: a map view of where you are and the spots around you where you can park.
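Under the hood, that green-or-red call comes down to matching the permits a driver holds against the restrictions in force for a spot at the current time. Below is a minimal, hypothetical sketch of that matching logic (not Ustwo's actual implementation); the permit names, data structure, and time rule are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ParkingSpot:
    spot_id: str
    allowed_permits: set[str]   # permits that allow parking here during restricted hours
    restricted_hours: range     # hours of the day when the restriction applies

def spot_color(spot: ParkingSpot, driver_permits: set[str], now: datetime) -> str:
    """Return 'green' if the driver may park in this spot right now, else 'red'."""
    # Outside restricted hours, anyone may park.
    if now.hour not in spot.restricted_hours:
        return "green"
    # During restricted hours, the driver needs at least one matching permit.
    return "green" if spot.allowed_permits & driver_permits else "red"

# Example: a resident-permit holder checking a hypothetical bay at 2 p.m.
bay = ParkingSpot("islington-bay-102", {"resident-zone-a"}, range(8, 18))
print(spot_color(bay, {"resident-zone-a"}, datetime(2016, 4, 1, 14)))  # green
```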
It took the design team time to land on that visual solution, though. In the first iterations of GoPark's interface, this map view was only visible on one or two screens. Yet in early testing, one thing Ustwo discovered was that drivers in Islington had developed significant trust issues about parking—and a simple map turned out to be the most compelling way to gain that trust.
"The biggest thing we learned was that Londoners don't trust anything involving parking enforcement," Smith says. "They don't trust the council, they don't trust the signs, they don't trust the parking enforcement officers, and they didn't trust our app." By putting a real-time map on every screen, GoPark constantly reassures users that it has a 100% accurate understanding of where their car is. Meanwhile, graphic signifiers on the screen soothe users, reiterating that the app knows what permit they have. Those two things were enough, Smith says, to mitigate a lot of the trust issues London parkers have.
GoPark is now available to beta testers on Android and iOS by invite only, and Ford and Ustwo intend to add new features over time, such as predictive parking, where the app tells drivers where they're most likely to find a free space based on historical data. They also intend to implement a payment option inside the app, so that drivers can pay for a metered spot without feeding coins into a machine. In fact, this is why Ford is experimenting with the app in the first place: The project is overseen by Ford Smart Mobility, the division within Ford that explores revenue streams outside of just pushing more cars on the road.
The ultimate goal, though, is to break GoPark free of Islington—first to bring the app to the rest of London, and then to other cities. Because if GoPark can fix parking in Islington, the likes of New York, Tokyo, or San Francisco will be a piece of cake. But maybe not Chennai.
It's sometimes hard to remember that iTunes once solved a real problem, given what a bloated mess it has become. Digital music was taking off, and so people had folders upon folders of random MP3s, mixed in with all the other stuff on their hard drive. iTunes gave order to this madness, organizing all the songs on a computer, making them searchable and—with an iPod—syncable.
The Noun Project, an online vault for visual language, wants to solve a similar problem that faces designers. Its new desktop app, Lingo, is designed to take the endless folders of icons, illustrations, colors, UI components, and other elements that designers need to work with on a daily basis, and bring order to the chaos, organizing these libraries so they can be easily searched—and, even more importantly, synced with colleagues and teammates. According to Ben Blumenfeld, cofounder of the design-focused investor Designer Fund, which has invested in The Noun Project, the app has become part of the workflow for design teams at Airbnb, Dollar Shave Club, Snapchat, and more.
Right now, the primary way that designers keep track of their visual assets is through file explorers like Finder. That's a problem, argues the Noun Project's Edward Boatman, because Finder isn't designed for navigating visual assets. Designers routinely have to browse through endless lists of file names, or navigate an intractable series of nested subfolders, just to find the asset they want. And if they want to share their assets with a colleague, there's no good way to do it, short of just zipping up the entire folder structure. "With Lingo, we wanted to give an alternative to the messy folder and Finder paradigm that's just so omnipresent in our digital world," says Boatman.
To use Lingo, the first thing you do is import the visual assets you're working with into the app. Once you've copied or dragged-and-dropped them in, Lingo uploads them to the cloud, automatically tagging them by scanning the filenames for words. Once your assets are in Lingo's database, you can organize them into collections, add notes and tags to them, favorite them, or search through them. And if you want to export a Lingo asset, all you need to do is click the asset, then drag it as either an SVG, PDF, EPS, or PNG file to your desktop or another design app.
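That filename-based auto-tagging is simple to picture in code. The snippet below is a generic, hypothetical illustration of the idea (not The Noun Project's actual pipeline); the splitting and filtering rules are assumptions.

```python
import re
from pathlib import Path

def tags_from_filename(path: str) -> list[str]:
    """Derive search tags from a filename, e.g. 'Icon_Arrow-Left_24px.svg' -> ['icon', 'arrow', 'left']."""
    stem = Path(path).stem
    # Split on underscores, hyphens, dots, spaces, and camelCase boundaries.
    words = re.split(r"[_\-.\s]+|(?<=[a-z])(?=[A-Z])", stem)
    # Drop empty fragments and size suffixes like '24px' or '2x'.
    return [w.lower() for w in words if w and not re.fullmatch(r"\d+(px|x)?", w, re.IGNORECASE)]

print(tags_from_filename("Icon_Arrow-Left_24px.svg"))  # ['icon', 'arrow', 'left']
```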
Lingo also helps keep the visual assets of large design teams synced. If you're part of a team, you can subscribe to different kits, or channels of assets, as easily as you join or leave a channel on Slack. So instead of having to email a new designer your company's icon assets, for example, she can just subscribe to the corresponding Lingo kit.
Matt Reamer, a designer at Dollar Shave Club, has been using Lingo for about six months and says it has become an essential part of the company's work flow. "As Dollar Shave Club grows and launches more products, new features and improving existing features we have a plethora of assets we need to manage," he says. "From icons the UX team uses in flow diagrams and wireframe assets, it gets hard to manage. Then when you take that and add in product shots images, lifestyle imagery, social posts, illustrations and content visuals for our monthly member magazine, it's chaos. Lingo really helps to take the chaos of asset management company-wide and distill it down."
Lingo might seem like a strange offshoot for The Noun Project, best known for creating an online dictionary of universal symbols. "Our mission is creating, celebrating, and sharing visual language," Boatman says. "But visual language is so much bigger than just iconography. It encompasses colors, illustrations, photographs, even GIFs." Lingo is The Noun Project's way to move beyond icons. Ultimately, Boatman says the goal is for Lingo to become "the default browser for all the world's visual assets."
Of course, the real question remains: Do designers want to have to deal with another app? Working with images in folders may be inelegant, but it has the benefit of not being another piece of software to install. To get art for this story, I asked the Noun Project to send me images, which they did through Lingo. So instead of forwarding an email with a zip file to our art editor, I had to install Lingo, log in, download all the images, then attach them to an email and forward it. That's a lot of friction added to what should be a simple exchange, if you're not going to buy into Lingo entirely as what Boatman would call the default browser for all your computer's visual assets.
But if you are willing to do that, Lingo may well simplify your workflow. You can sign up for Lingo here. Although Lingo is free to use for individuals, its more advanced features (like keeping your design assets synced across a team) require a monthly "Pro" subscription. Boatman also says that they plan on adding marketplace features so that creators can sell and distribute kits of visual assets through Lingo, with a similar 60/40 revenue split to the one they currently use with their partners at The Noun Project.
In France, light pollution laws prevent retailers from illuminating their shop windows between 1 a.m. and 7 a.m. But Sandra Rey dreams that one day, if you went for a midnight stroll down the Champs-Élysées, the shop windows and signs would give off an otherworldly glow, from a source of illumination usually seen only darting through the dark waters of the ocean at night.
To make that happen, Rey's company, Glowee, just launched its first commercial product: a bacteria-powered light that the company is marketing to retailers throughout France. It works by trapping a bacterium called Aliivibrio fischeri, which gives sea creatures like the Hawaiian bobtail squid their bluish bioluminescent glow, inside a transparent pack of nutrient gel. As the bacterium eats, it gives off a blue-green glow, with about the intensity of a nightlight. That doesn't sound like much, but "the great thing is that even if the intensity is lower, the fact that [the light is all] surface [area] provides bigger illumination zones," Rey explains over email.
Even so, that's not enough raw lighting power to replace traditional bulbs or LEDs. But it doesn't have to be. In addition to retail windows at night, Rey envisions Glowee being used for decorative lighting, city signage, floor lighting, safety lighting, festivals, and so on. And there are other advantages: Glowee doesn't require electrical infrastructure. The cases that hold the Glowee gel are made of fully customizable organic resin, about 1 centimeter thick; while they are almost envelope-like in appearance now, they could be custom-molded into any shape by artists and designers who want to create their own Glowee-powered designs.
There are challenges ahead, though. For one, replacing light bulbs with what are, in essence, bacteria-filled food bags brings its own unique problems—especially with regard to how long they last. Right now, a Glowee will only emit light for three days before going dark. At that point, Glowee will come by and replace it; the fledgling company operates on a subscription model, in which retailers essentially rent their lights. Considering that many LEDs can emit two years' worth of light or more, three days seems minuscule. But it's orders of magnitude better than Glowee's initial prototypes, which only emitted light for a few seconds. In fact, by tweaking its nutrient gel, Glowee thinks that by 2017, its lights will easily last a month. And because Glowee's business is subscription-based, customers who rent a light now will see it slowly improve over time as development continues.
In the longer term, Glowee is hard at work genetically engineering its bacteria to only activate their bioluminescence at night—as well as reproduce at a slower rate. That could get a Glowee lamp's lifespan up to a year or more. And Rey says the possibility of improving Glowee's intensity through genetic engineering is also "huge." For now, though, Glowee is best used for temporary signage, which is why its earliest customers—while still confidential—are in industries like retail and construction, for safety and warning lights.
But with backing from companies like ERDF, which manages 95% of France's electrical grid, Rey says she ultimately hopes that Glowee lights will make a significant impact on the world's electricity usage—around 19% of which goes to the production of light. If she's successful, nighttime in cities may take on a distinctly cyberpunk glow.
Two years ago, San Francisco-based photographer Thomas Heinser was driving on I-5 through the Central Valley on his way to L.A. Citrus groves lined each side of the interstate as he drove, surrounding him with explosions of orange, green, and yellow. But Heinser's attention wasn't drawn to the colors. Instead, what stuck with him was a spot that stood out like a cancer: the crumbling, desiccated ruins of an almond grove, starved of moisture and left to rot.
Since then, Heinser has been going out in helicopters and taking breathtaking overhead shots of what climate change has done to California. Although his photos are taken at over a thousand feet in the air, they often have the vivid, otherworldly feel of photomicrographs—for example, a vivid, blood-red corpuscle, running through flesh like a river, or a closeup shot of petrified wood.
All of Heinser's photographs are taken within five minutes before or after sunrise, a time of day that he says gives the best quality of light. Though you'd be hard-pressed to identify them, his subjects are well-known areas of Lake County, the Central Valley, and the SFO/South Bay area. The scenes themselves include evaporation ponds, parched earth racked by drought, and woods destroyed by forest fires. In all of these images, Heinser strives for a minimum of post-processing, and all the photos in his Reduziert series have one thing in common: They all showcase the results of man's intervention with nature and the apocalyptic effects of human-wrought climate change.
According to Heinser, when people look at his art, they often aren't sure if they're looking at a photograph or a vivid watercolor. He finds that flattering, but ultimately, he sees himself as a documentarian of what he calls "the complicated issue of beauty documenting devastation," which he hopes will inspire more people to talk about the reality of climate change.
The Milanese photographer Louis De Belle was raised Catholic, and he's long been fascinated by the materialistic side of religion. In his latest self-published book, Besides Faith, De Belle took his camera to the CES of religion: The 17th annual Koinè International Exhibition of Church Furnishings, Liturgical Items, and Religious Building Components.
Although it takes place in Venice, and 85% of its 13,000 annual visitors are Italian, the Koinè expo isn't devoted exclusively to Catholic merchandise. As De Belle's photos show, amongst the rosaries and Madonna statues, Koinè dealers hawk Thai silk chasubles and Greek Orthodox reliefs. Still, Roman Catholic tchotchkes are a major focus of Koinè, and it shows in De Belle's photos.
De Belle says he was initially nervous about going to Koinè, which is not open to the public. After securing a press pass, though, he discovered that the exhibitors at the show were eager to be photographed. "The atmosphere is rather bizarre," he notes, calling it a "non-place" and a "neutral dimension."
What makes De Belle's photographs so much fun is the way they linger on the weirder artifacts Koinè has to offer, such as a glass holographic head of Saint Pio, the controversial Capuchin monk who, in the early 20th century, regularly exhibited supposed stigmata on his hands, feet, and side, corresponding to Christ's crucifixion wounds. Another image shows a shelf full of golden chalices like a Kmart for Holy Grails, and a display of dozens of Nativity Baby Jesuses of all shapes and sizes—including a visualization of the Messiah's growth in utero.
De Belle's personal favorite discovery, though, was an electronic rosary, marketed to people who have arthritis. "You just press a button and pray along," explains De Belle, noting wryly: "Devotional objects and figurines have no limits, especially in Italy."
Even the blackest of thumbs love terrariums, and why not? These self-enclosed plant ecosystems not only look beautiful, but they pretty much take care of themselves. There's only one thing they need that isn't already in the jar: sunlight.
Unfortunately, there are some spaces where natural light is in short supply, like micro-apartments, retail spaces, even some hotels and restaurants. That's why German design studio Nui Studio, formerly We Love Eames, designed the Mygdal—a new kind of terrarium that also doubles as a sunlamp.
Nui Studio—made up of designers Emilia Lucht and Arne Sebrantke—took the name Mygdal from a village in northern Denmark; it translates to "fertile soil." Each lamp, made of aluminum and hand-blown glass, supplies light through a bank of LEDs that not only illuminate any windowless space, but provide enough light for the plants inside the terrarium bulb to photosynthesize.
According to Sebrantke, the inspiration for the Mygdal came when he and Lucht realized that plants were getting left behind in increasingly urbanized areas where they were needed most. "Nature contributes significantly to our well-being," he says. "But not everyone has a green thumb or enough daylight to grow plants." So the duo set out to design a terrarium that could nurture a plant, even without gardening skill or sunlight. It's like the high-end design equivalent of a Chia Pet!
Sadly, unlike a Chia Pet, it's hard to buy a Mygdal right now. Only a few have been made, but Nui Studio says it's working to start manufacturing them on a larger scale soon. You can stay tuned for updates here.
For years, we've been hearing about how 3-D printing was going to revolutionize the fashion and footwear industry. Unless you are a major sports star with a Nike deal, though, these "revolutionary" 3-D-printed sneakers have been largely absent. The closest we've seen to consumer products were United Nude's 3-D-printed pumps, which were designed more for the shelf than the human foot.
This month, Under Armour unveiled the Architech, a performance trainer with a 3-D-printed midsole that has been designed to help athletes stay stable during strength training. Unlike other 3-D-printed footwear, you can actually buy a pair of Architechs, at least if you get in there early. But as the Architech shows, we're still a long way from 3-D printing shoes at scale. Here's why.
Right now, whether you're buying cheap Keds or high-end Nike sneakers, all shoes are made in pretty much the same way.
First, the parts of a shoe are cut by steel dies on a hydraulic press, which almost functions like a cookie cutter. More elaborate sneaker parts, like the soles, are produced in molds. After all of the parts of a sneaker have been created, they are then stitched together on an assembly line, piece by piece.
It's a process that works well, tens of millions of times a year. But it doesn't allow for customization. The sneaker manufacturing process treats every foot as if it's more or less the same.
That's why shoemakers are so excited about 3-D printing. It doesn't really make sense to 3-D print an entire shoe—that's too expensive—but even just 3-D printing the sole could, in theory, allow shoemakers to customize each sneaker to its wearer's foot.
Wisely, then, Under Armour's new Architechs don't try to 3-D print the whole shoe. Instead, their new strength trainers—which are designed to keep athletes stable as they lift weights in the gym—are mostly assembled conventionally, except for the 3-D-printed midsoles.
Even so, Under Armour admits it just isn't ready to mass-manufacture these: Each of the 96 pairs of Architechs has been assembled at UA's Baltimore innovation lab. And the midsoles aren't being individually customized either. Under Armour, which is selling the Architechs for $300 a pair, wouldn't even tell me whether it's making a profit on them. "It's definitely a different cost structure," says Under Armour's VP of training footwear, Chris Lindgren.
The Architechs, then, are essentially a statement shoe. "We're trying to dip our toe in the water, not in level of commitment to the technology, but to see how consumers react, and what we can learn from them," Lindgren says. "There are other ways to make shoes like this, including very expensive molding, but we think 3-D printing is going to fundamentally change the manufacturing model."
It feels like we've been hearing forever that 3-D printing is going to change the shoe industry, only for it not to deliver. The reason is that 3-D printing remains a twitchy technology. It's slower than molding and prone to errors, which can mean considerable time lost when a 3-D-printed object doesn't turn out exactly as you'd expect.
Part of the problem is that the factories in Europe, Southeast Asia, and South America where most shoes are assembled simply aren't outfitted with banks of 3-D printers yet. That's partly about economics: Why invest in 1,000 3-D printers to make a midsole when it's cheaper to make it with injection molding?
But there's another issue at play, says Autodesk senior director of design research Mark Davis, who helped provide Under Armour the generative design technology used to make the Architechs a reality. Right now, even if factories did have those bays of 3-D printers, there'd be few qualified people to operate them. The "tribal knowledge" to 3-D print at scale just isn't there.
"In 3-D printing, you only become an expert after making a lot of mistakes, and learning from them," Davis tells me. That's true on the MakerBot level, and it's doubly true at manufacturing scale. Davis says it took Autodesk about a decade to reach the point where the company felt confident it was an expert in additive manufacturing.
"The expertise to work with these machines is still in rare supply," Davis argues. "And there's really no shortcut to get it."
Not coincidentally, 10 years from now is when Under Armour's Chris Lindgren says that he thinks 3-D printing will be a fundamental part of every sneaker factory. When that happens, buying a high-end pair of customized sneakers might look something like this.
Instead of just going into a store and trying on a new pair of kicks, you might download a smartphone app, and then use it to scan your foot's unique shape. That scan would then be uploaded to the cloud, where it would be paired with generative design software like Autodesk Within to create a unique latticed midsole specific to your foot's morphology. This midsole would then be printed out on an additive manufacturing machine in a footwear factory hundreds or thousands of miles away, then stitched to the parts of the sneaker still being produced by traditional manufacturing methods. The finished pair of shoes, uniquely customized to your foot, would then be mailed directly to your door.
Companies like Under Armour don't expect 3-D printing to make consumer footwear cheaper. In fact, the opposite is probably true: 3-D printing will likely open up whole new avenues for footwear companies to profit.
"I don't think we see 3-D printing as lowering the cost [of footwear], so much as changing the financial model," Lindgren says. After all, wouldn't you pay more for a pair of sneakers as custom-tailored to you as a pair of Nikes are to LeBron James?
Earlier this month, I wrote about the Kinematic Petals dress, a sleek, impossibly intricate dress on display at Boston's Museum of Fine Arts that could be 3-D printed to fit any body type, no assembly required. Today, you can actually buy one.
Created by Nervous System, a team of ex-MIT generative designers based in Somerville, Massachusetts, the Kinematic Petals dress was made up of 1,600 distinct, scale-like pieces, which acted like a continuous textile when worn. Beyond the look of the garment itself, one of the cooler features of the project was that the design of the dress was computer generated: Just by uploading a scan of your body, you could have the dress pattern reshape itself so that it looked good on anyone.
Now, Nervous System has teamed up with Body Labs—a company in New York that creates 3-D body models from scans—to sell the Kinematic Petals dress. To tailor the dress to your own body shape, you just go to the Body Shape Explorer on Nervous System's website. There, you drag sliders for measurements like chest, inseam, hips, waist, weight, and height to create a (not quite flattering) model of your body.
Once you've done that, navigate over to Kinematics Cloth, where you can adjust the style of the dress to your body. You can choose the shape of the dress you want, including a one-shoulder version, a cocktail-style dress, or even a skirt or tank top. You can also adjust the density of the dress's petals, and even change the width, height, length, and direction of the individual petals making up the dress. You then save your dress and contact Nervous System to purchase it and have it printed.
Which is shorthand for, "Don't expect this dress to be cheap." Body Labs tells me the cost of a personalized Kinematic Petals dress will start at $6,000, and go all the way up to $10,000. Though it's expensive, there's reason to be excited about it: No project we've seen better shows how 3-D printing could eventually revolutionize the fashion industry—by making tailors obsolete, and custom clothes as easy to buy as placing an order on Amazon.
Who gave the world Scandinavian design? Arne Jacobsen, Eero Aarnio, Alvar Aalto, or Ingvar Kamprad? Okay, sure. But another, larger force played a major role in establishing Scandinavian design as well: World War II.
In an excellent piece over at Curbed, author Sarah Hucal outlines the history of Scandinavian design, and shows that before World War II, the concept didn't really exist. Hucal explains:
How did countries with disparate histories, languages, and even geographical features—Finland is known for its birch forests, while Iceland is largely tree-free—become lumped together in a single design movement? The branding of "Scandinavian design" is the result, according to some scholars, of a major international PR campaign. Solidarity between the Nordic countries grew during and after World War II, writes historian Widar Halén in Scandinavian Design: Beyond the Myth. Conferences held throughout the 1940s in Helsinki, Oslo, Stockholm, and Copenhagen concluded that "Nordic countries could be perceived as an entity when it came to design issues," according to the historian.
After World War II, the design world was looking for an antidote for the "totalitarian" International Style, which—thanks to the Bauhaus—was famously linked to Germany. While Hitler and the Nazis hated modernism, driving most avant-garde designers out of Germany early on, Hucal says that some of the first proponents of Scandinavian design seemed to see a link between National Socialism and the International Style, thanks to its perceived totalitarian ethos.
House Beautiful editor Elizabeth Gordon, whom Hucal describes as a "prominent midcentury tastemaker," and who ended up popularizing Scandinavian design, described those International Style designers as "dictators in matters of taste." Instead, she presented Scandinavian design as an alternative to Nazi-era design fascism: democratic, natural, minimal, intimate, and focused on the home and family, not the State. With the support of the kings of Norway, Sweden, and Denmark, Gordon brought Scandinavian design on the road with the "Design in Scandinavia" exhibition, which toured 24 American and Canadian cities between 1954 and 1957.
By the end of the 1950s, Scandinavian design was everywhere in America, and although interest waned in the 1960s, the '90s-era focus on sustainability in design brought the style back, and it remains popular today. So if you love going to Ikea or lounging around in your reproduction Swan Chair, just think: If not for World War II, so-called Scandinavian design may never have left Northern Europe.
With enough ingenuity, 3-D printers can print out pretty much anything. But they're definitely not fast—especially for heavy, complex jobs.
"3-D printing is measured in ounces per hour, so if you need to print a 100-pound object and can only do an ounce an hour, well, you do the math," Autodesk's Corey Bloome says. (We did it for you: it's 66.7 days.) That's a big problem for manufacturers in industries like aerospace, automobiles, and construction, who want to leverage the capabilities of 3-D printing to mass produce new parts with unconventional geometries.
Autodesk's solution to the problem? Project Escher, which isn't so much a new kind of 3-D printer as it is a hive-mind of 3-D printers, networked together and working in unison to create huge objects in a fraction of the time it would normally take to produce.
Here's how it works. Let's say you want to print something huge, like the blade of a wind turbine. Autodesk's Project Escher software first takes the plans for that blade and intelligently slices it apart. It then hands a slice of the finished turbine to each individual 3-D printing "bot" mounted in a gantry. There's no limit to the number of bots you can have in a gantry, says Bloome, who was the hardware lead for the project; it just depends on how many 3-D printers you have. Working together, each bot 3-D prints its own section of the finished piece, until it's completed as one continuous, pre-assembled object.
The result is a unique kind of 3-D printing network that prints out faster the more bots are in the Project Escher array. Bloome tells me that it's 80% to 90% more efficient. In other words, if you have five bots in a Project Escher job, the finished design will print out around 4-4.5 times faster than it would with a single 3-D printer.
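Both figures check out with back-of-the-envelope math: a 100-pound part is 1,600 ounces, so a single printer at an ounce an hour needs roughly 66.7 days, while five bots at 80% to 90% parallel efficiency cut that by a factor of 4 to 4.5. Here's a quick sketch of the arithmetic; the simple bots-times-efficiency model is my assumption, not a formula Autodesk has published.

```python
OUNCES_PER_POUND = 16

def single_printer_days(weight_lb: float, oz_per_hour: float = 1.0) -> float:
    """Days to print a part of the given weight on one printer."""
    return weight_lb * OUNCES_PER_POUND / oz_per_hour / 24

def escher_days(weight_lb: float, num_bots: int, efficiency: float) -> float:
    """Same job split across a gantry, assuming speedup = bots x parallel efficiency."""
    return single_printer_days(weight_lb) / (num_bots * efficiency)

print(f"1 printer:    {single_printer_days(100):.1f} days")        # ~66.7 days
print(f"5 bots @ 80%: {escher_days(100, 5, 0.8):.1f} days (4.0x)")  # ~16.7 days
print(f"5 bots @ 90%: {escher_days(100, 5, 0.9):.1f} days (4.5x)")  # ~14.8 days
```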
Project Escher isn't for hobbyists, though. With it, Autodesk is firmly targeting industries which build big and are stymied by the slow pace of conventional additive manufacturing. "If you're in aerospace, automotive, or construction, no one has 100 hours to print out just one thing," explains Kimberley Losey, Project Escher's product marketing lead. "At that scale, there's just tons of constraints for conventional additive manufacturing. So we thought, how can we solve that problem?"
Autodesk envisions Project Escher being used in any industry where you need to print big, but also customize what you're printing. One example Bloome gives is in car manufacturing, where you might want to customize a driver's seat to the unique contours of their body. Project Escher also allows for quickly printing out large objects with geometries not possible in other forms of manufacturing.
Project Escher could also be used in an automated assembly line. While right now the gantry isn't made up of anything but 3-D printers, Bloome says there's no reason robot arms can't be added to the network, so that non-additive elements like wires and circuit boards could be embedded during the printing process. Theoretically, you could 3-D print a complex object or gadget this way, like a car, without any humans being involved in the process once they hit Project Escher's start button.
That's far in the future, though. For now, Project Escher is still in the experimental phase. The next step, says Bloome, is to get companies actually using it, so they can see how it performs in a real-world manufacturing environment.
As for the cool ass name? Bloome admits that Project Escher doesn't really have much relation to M.C. Escher. "Basically, a bunch of our engineers got together, threw a bunch of code names at the wall, and everyone voted for the coolest name," he laughs.
As seen in Star Trek, teleportation breaks down people and objects into a stream of atoms, transports them through space, and rebuilds them on the other end—a process that has all sorts of sticky philosophical issues attached to it, even if you could figure out the physics. Holoportation is a simpler alternative, courtesy of Microsoft Research, which uses 3-D capture technology to beam 360-degree holograms of other people or objects in real time to any space. All without disintegrating anyone!
Created by Microsoft Research's Interactive 3D Technologies team, Holoportation is part augmented reality, part hologram, and part Skype. It works by constructing a detailed 3-D model of anything standing within the field of view of eight depth-detecting 3-D cameras, then compressing it and beaming it in real time to another location similarly outfitted with 3-D cameras. By wearing a pair of Microsoft's HoloLens glasses—which, in keeping with the whole Star Trek analogy, really do sort of look like Geordi La Forge's visor—you can interact with a realistic 3-D hologram of another person, without ever stepping into the same room with them.
As demonstrated in a video by Interactive 3D Technologies' partner research manager, Shahram Izadi, Holoportation has loads of interesting use scenarios. Workaholic dads can use it to play with their kids while on a business trip, while remote workers could use it to make presentations at business meetings from across the country. Musicians could use it to stage virtual concerts, while actors could transport themselves into plays, even if they were in a different country. And because Holoportation records everything that happens as a moving 3-D digital model, you can even treat those recordings like "living memories," shrinking them down so they replay in the palm of your hand.
It's a pretty compelling, if slightly disorienting, look at the power of augmented reality technology to erase borders and physical boundaries in the family, the workplace, the studio, and even—if we get imaginative—the bedroom. Today, we might all be mocking this photo of Mark Zuckerberg walking past an auditorium of people using VR headsets, but who knows? Next year, he might not need to physically walk down that hallway at all. He'll just Holoport.
I stand, a cyborg micronaut, in front of the biggest human heart any person has ever seen.
It is Arch Oboler huge, a flesh pump the size of a skyscraper, but I manipulate it with godlike facility, rotating it in midair with my hands and bisecting it with just a flick of the finger. Through ventricles and thundering aorta I dive, floating through heart chambers so enormous, they become corpuscle cathedrals.
From all sides, I am swallowed by this living, beating heart, but I am not in danger. By just thumbing a button at my fingertips, I could easily shrink it down to the size of my hand, or make it disappear entirely.
But this is just a preview of things to come. Because I've been told that in a few years, it won't be any heart I swim through. It will be a digital twin of my own.
When I take off my goggles, I'm not standing in a human heart anymore. I'm standing in a multimillion-dollar virtual reality "cave" off I-95 in the Waltham, Massachusetts, headquarters of Dassault Systèmes.
Dassault is a 3-D design software company you've probably never heard of. It sells powerful 3-D simulation systems to companies like Tesla, letting the automaker, for example, crash-test its latest car designs in virtual reality, no dummies required. Another client is Boeing, which uses Dassault's software to design and test plane parts virtually. The benefit is that new parts and designs can be tested without building—or destroying—physical models.
With the Living Heart Project, or LHP, Dassault is trying to bring this same technology to medicine. It simulates—in VR, or on a 3-D kiosk somewhat similar to a big Nintendo 3DS—a baseline healthy human heart, which can then be used to study things like congenital heart disease or heart defects, or how foreign bodies like medical implants or new drugs interact with it.
And that's just the start. Imagine a future in which a doctor can see what's happening in your heart by strapping on an Oculus Rift and going inside it; one in which medical students can practice on virtual patients exhibiting rare conditions they might never see in a training hospital; one where new drugs and risky new surgical techniques can be tested virtually again and again, before they are ever tried on a physical heart.
The Living Heart Project is the personal project of Steve Levine, senior director of Dassault's SIMULIA division. A materials scientist by training, Levine doesn't have a medical background. But what he does have is a daughter with a congenital heart defect.
Jesse Levine, 26, was born with a congenital heart condition in which her primary arteries and ventricles are transposed. This means that the wrong pumps and valves are responsible for keeping blood flowing properly through her body. Because her electrical system is disrupted, she had her first pacemaker installed when she was two; she has had three replacements since then.
The Living Heart Project was born out of Levine's frustration with the fact that no one seemed to understand how his daughter's heart worked on a fundamental level. "So much of medicine is learning by direct observation," Levine says. "That's great for people like you and me, because there are millions of us. But for people like my daughter? Her condition makes her one in a million."
So Levine set out to build a platform that would eventually allow him to understand his daughter's heart the way a materials scientist would. The Living Heart Project is the result.
Right now, the Living Heart Project can't simulate exactly what's going on in Jesse's heart. Leveraging data from the Food and Drug Administration (FDA), the Mayo Clinic, and other partners, Dassault's software only simulates a "normal" heart. That's a necessary starting point, says Levine. From a mechanical engineering perspective, it's important to prove that your model can accurately simulate the baseline, before you start throwing it curve balls.
But even a "baseline" simulated heart can be useful for testing new types of implants and medicines and proving they work, which is why the FDA is so keenly interested. The agency has signed a five-year research agreement with Dassault to use the LHP to simulate the reliability of pacemaker leads, the wires that deliver electrical impulses to a patient's heart. If the FDA embraces the LHP, it could speed up the time it takes to bring new medical advances to market.
Eventually, Dassault sees the Living Heart Project evolving so that doctors will be able to simulate anyone's heart condition in virtual reality, just by feeding it MRI data and putting on a VR headset. People with unique heart conditions like Jesse Levine will experience a better quality of care, because doctors will actually be able to explore what Dassault calls a "digital twin" of her heart to custom-tailor new treatments to her unique needs.
And needless to say, if Dassault can pull off this level of medically accurate simulation, the heart's just one project. Other organs and systems such as the brain and lungs could also be simulated, until the "H" in LHP stands for "human," not "heart."
That's still far off. Right now, Levine admits that even as just a heart simulator, LHP has about a decade to go before it can function as a "digital twin" of any patient's real heart. Dassault has just started, for example, working out how to simulate disease states. After that, there will be significant hurdles to getting the LHP accepted by regulatory bodies, which are still learning how to interpret data from virtual testing. Dassault also needs to convince doctors and hospitals that patient outcomes will be better if they use their software, which can only be proven after they've already done so. "It's a bit of a chicken-egg problem," admits Levine.
There's no doubt in Levine's mind, though, that software like the Living Heart Project—and, by extension, virtual reality as a whole—is going to eventually make a big impact on the medical industry.
"Our minds are built to work in 3-D, but for years, doctors have had to make do with trying to understand the body through one- and two-dimensional data sets," he says. When affordable virtual reality systems like the Oculus Rift reach a tipping point, Levine likens the transition doctors will make to what happened when architects were able to make the leap from pencil drawings to CAD programs. Doctors will finally be able to achieve a "fundamental understanding" of how a patient's body works through simulation and direct observation, allowing them to give better treatment, save lives, and reduce the cost of health care over time.
"I would say that VR is unequivocally the future of medicine," Levine says. "It's just a question of when."
Japan's speedy bullet trains already move so fast that you almost can't see them coming. The new train being designed for the Seibu Railway Co. by Japanese architect Kazuyo Sejima of SANAA will be hard to see even standing still. It's a chameleon-like train that has been designed to blend into the countryside it streaks through.
Scheduled to hit the tracks in 2018, the new Seibu flagship train cars have an organic shape that is much different from the boxy New Red Arrow trains that currently run limited express services in the Tokyo area. Coupled with a semi-reflective skin designed to mirror the surrounding scenery, Sejima's train was designed with the stated goal of being as much fun to watch blending into its surroundings as it is to ride.
The Seibu train will be the first train ever designed by Sejima, a recipient of the Pritzker, often called the Nobel Prize of architecture. It's the latest example of Japan's railways and train services turning to unconventional designers to reimagine the way trains look in the country.
Sejima says that what appealed to her about the project was the difference between designing a building, which is rooted in a single spot, and designing an object that needs to travel through many different environments.
"The limited express travels in a variety of different sceneries, from the mountains of Chichibu to the middle of Tokyo, and I thought it would be good if the train could gently co-exist with this variety of scenery," Sejima is quoted as saying in Seibu's official press release. "I also would like it to be a limited express where large numbers of people can all relax in comfort, in their own way, like a living room, so that they think to themselves 'I look forward to riding that train again.'"
Of course, in a way, a train's appearance probably makes less impact on its environment than anything else about it. Emissions, sound pollution, and the disruption of laying down miles of track are all going to be bigger problems than the sight of a train quickly passing through a given area.
It's fitting, though, watching Japan—long a country that tries to emphasize design harmony with nature—finally apply the same approach to its railroad system. Let's just hope that if Japan is going to have invisible trains, it at least makes sure everyone knows where the tracks are.
When he's not pushing the boundaries of virtual reality as part of his day job, Casey Rodarmor is something of a nostalgist for old-school computer graphics. He loves ANSI art, a more advanced form of ASCII art that supports both foreground and background colors, as well as a larger character set that includes several characters specifically designed for drawing. ANSI art first became popular in the pre-Internet days of dial-up bulletin board systems, because it was a low-bandwidth way to draw colorful graphics on text-only terminal screens. And while the BBS era is long dead, the ANSI art scene is still going strong.
What turned Rodarmor on to the idea of teaching a neural network to draw using ANSI art was the sheer ingenuity of the form. Like ASCII art, it takes incredible skill to create a recognizable picture using just 256 characters; as such, every ANSI artist has his own hallmark tricks to get the effects he is looking for. "Human-made ANSI art often contains very clever or unique details which would be very difficult for a neural network to learn," Rodarmor explains. "Neural networks need to see many, many variations of a pattern in order to learn it, so a clever flourish that appears just a few times is unlikely to be picked up by a neural network."
Not knowing what would happen, Rodarmor decided to feed 35 years' worth of ANSI art (an almost inconceivably meager 32MB of data) into a neural network to see if it could learn the form. He stripped out color to make the ANSI art easier for the network to learn from, then gave it four days to train. After 96 hours, Rodarmor's Artnet started spitting out ANSIs of its own.
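Rodarmor hasn't detailed Artnet's architecture here, but a character-level recurrent network is the standard way to learn a text-like form such as ANSI art: build a vocabulary from the corpus, then train the model to predict each next character. The sketch below is a generic, hypothetical version of that setup in PyTorch, not Rodarmor's actual code; the file name, hyperparameters, and CP437 encoding are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical corpus: the color-stripped ANSI art concatenated into one big string.
corpus = open("ansi_corpus.txt", encoding="cp437").read()
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in corpus], dtype=torch.long)

class CharLSTM(nn.Module):
    """Predicts the next character of an ANSI piece, one character at a time."""
    def __init__(self, vocab_size: int, hidden: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 64)
        self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharLSTM(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
seq_len, batch_size = 256, 32

for step in range(100_000):  # Rodarmor trained for ~96 hours; the step count here is illustrative
    # Sample random windows of the corpus and learn to predict each window's next character.
    starts = torch.randint(0, len(data) - seq_len - 1, (batch_size,)).tolist()
    x = torch.stack([data[s:s + seq_len] for s in starts])
    y = torch.stack([data[s + 1:s + seq_len + 1] for s in starts])
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained, sampling from such a model one character at a time, feeding each prediction back in as the next input, is what produces new ANSIs.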
"It's always a rare treat when I write a program whose behavior surprises me, and this was one of those instances," says Rodarmor. "I had absolutely no idea what the output would look like. When it started to produce good stuff I was pretty thrilled."
Artnet's ANSIs are admittedly a little basic. Like a lot of ANSI art, many of Artnet's finished pieces resemble low-res graffiti tags, but unlike graffiti tags, you can't read them. They're all abstract geometry, which, if you squint a little, you can imagine as stylized lettering you can't quite make out. In addition, Rodarmor says his neural network is proficient at producing "huge meandering clouds of block letters," which are its version of ANSI paintings. Unlike traditional ANSI artists, though, his neural network hasn't quite become proficient at creating the Rat Fink-like monster and human characters that are a mainstay of the form.
Even if the results are a little abstract, Rodarmor says people are still connecting with his neural network's ANSI art, for many of the reasons that low-res ANSI art became popular in the first place: People are really, really good at creatively interpreting shapes. "I spent a fun 15 minutes today arguing with a co-worker who swore that one of the images looked like a picture of a coquettish woman staring longingly at Nigel Thornberry," Rodarmor says.
Last week at Microsoft's Build conference, CEO Satya Nadella said that the future of the company was "conversation as platform." In other words, less Windows and Office, and more Cortana and Tay—conversational interfaces that can understand the natural language of human users.
If Nadella thought he was expressing some unique vision of the future, though, he was fooling himself. The idea of conversational UI has quickly colonized nearly every corner of Silicon Valley over the past year. Now seems like a good time to ask: What is a conversational interface?
A conversational interface is any UI that mimics chatting with a real human. The idea here is that instead of communicating with a computer on its own inhuman terms—by clicking on icons and entering syntax-specific commands—you interact with it on yours, by just telling it what to do.
Right now, there are two basic types of conversational interfaces. There are voice assistants, which you talk to, and there are chatbots, which you type to. I'd also probably distinguish a third "fake" kind of conversational interface: the pseudo-chatbot, which mimics a chatbot in appearance but is really a traditional point-and-click GUI. Microsoft Clippy and Quartz's weird text-messaging news app are good examples of pseudo-chatbots—they borrow the visuals of a chatbot but don't actually allow you to converse beyond their canned responses.
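To make that distinction concrete, here's a toy sketch in Python (mine, not anything shipped by Microsoft or Quartz): the first bot only accepts canned menu choices, while the second at least tries to interpret free-form text. Both bots and their replies are invented for illustration.

# A toy contrast (invented for illustration, not from any shipping product):
# a "pseudo-chatbot" that only understands canned menu choices versus a bot
# that makes a crude attempt at interpreting free-form text.
CANNED = {
    "1": "Here are today's headlines...",
    "2": "Here is the weather forecast...",
}

def pseudo_chatbot(user_input: str) -> str:
    # Anything outside the predefined options falls through to a nag.
    return CANNED.get(user_input.strip(), "Please reply with 1 or 2.")

def simple_chatbot(user_input: str) -> str:
    # Even crude keyword matching lets users phrase the request their own way.
    text = user_input.lower()
    if "weather" in text:
        return "Here is the weather forecast..."
    if "news" in text or "headline" in text:
        return "Here are today's headlines..."
    return "Sorry, I didn't catch that. Ask me about news or weather."

print(pseudo_chatbot("what's the weather like?"))  # "Please reply with 1 or 2."
print(simple_chatbot("what's the weather like?"))  # "Here is the weather forecast..."

Real chatbots and voice assistants obviously go far beyond keyword matching, but the point stands: the conversational version meets the user's phrasing, while the pseudo-chatbot just dresses up a menu.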
On the voice assistant front, pretty much every major tech company in the mobile space has its own at this point. Apple has Siri, Google has OK Google, Amazon has Alexa, Microsoft has Cortana, and so on. All of these voice assistants let you do things like play music, search Wikipedia, call someone, or set a timer, just by speaking. There are even more chatbot examples, though. Facebook has M, a human-assisted chatbot that lives within Messenger and can do anything for you, from booking a dinner reservation to buying you a car. Slack's Slackbot is another great example: a chatbot that poses as a real user in any Slack team and is used for everything from onboarding new users to serving as a notepad. There are countless other chatbots for Slack, too, including Howdy, a bot that does everything from scheduling meetings to taking lunch orders for your office.
Why is everyone so excited about them? Loads of reasons! For one, conversational interfaces are truly cross-platform. They work well everywhere, including smartphones, desktops, smartwatches, and even devices without screens at all, like the Amazon Echo. They can integrate with services like Twitter, Facebook, or Snapchat, or run in a plain text-message window. As Amazon, Google, and Apple have all shown, you can make a conversational interface better over time without pushing out regular updates. Conversational interfaces also mean that every single function in an app or service no longer needs to be buried in a menu or represented by an icon.
But the biggest reason everyone is excited about conversational interfaces is that they have the potential to eliminate the underlying friction that makes it hard for people to get things done on a computer.
Historically, computers and humans have essentially spoken different languages, with graphical interfaces as the translator. Computers are real sticklers for syntax—machine-assembly pedants who basically fall over if you tell them "hi" when they're expecting "hello." So for most of computer history, we've communicated with computers essentially through Rosetta stones: We point at a symbol representing what we want a computer to do, and then it does it. For example, clicking an icon to open an app. With conversational interfaces, computers and humans can finally speak the same language without a Rosetta stone in between.
Are conversational interfaces a new idea? Nope, they're pretty old, at least conceptually. HAL 9000 in Stanley Kubrick's 2001: A Space Odyssey and W.O.P.R. in WarGames are both examples of conversational interfaces in science-fiction movies.
So why is everyone talking about them now? Because technology has finally gotten good enough to make them practical. Over the last few years, advances in voice processing have made it easier than ever for computers to understand natural language, while the rise of smartphones has put an Internet-connected microphone in every pocket. Meanwhile, AI projects like Google's Knowledge Graph and Wolfram Alpha have made computers better than ever at understanding not just our syntax, but what we actually mean.
The dream of conversational interfaces is that they will finally allow humans to talk to computers in a way that puts the onus on the software, not the user, to figure out how to get things done. That's not only the way things should be; it has the potential to totally change the way we use computers going forward.
The themes of Shakespeare's plays are timeless, but one thing's for sure: the paperback covers ain't. So for its latest edition of Shakespeare paperbacks, Penguin imprint Pelican wanted to do something a little more modern. Something that didn't look like the bald-headed bard himself had just yanked it out of his ruff. So the company turned to 24-year-old Indian artist Manuja Waldia to adapt each play's central ideas and themes into a few minimalist, vector-based icons—which Waldia even animated.
Inspired by ancient symbol-based languages like hieroglyphics, as well as the minimalist iconography popular on the net today, Waldia has designed new covers for six plays so far. Her Macbeth cover shows one crown emptying blood into another; for Romeo and Juliet, two hearts sitting side by side, pierced by Cupid's arrows; for King Lear, a bearded king weeping in the rain; for Midsummer Night's Dream, the side of a Grecian urn; for Twelfth Night, a shipwreck hanging above Viola disguised as Cesario; and for Hamlet, a crowned skull sitting on a tombstone.
And that's just to start. Eventually, the plan is for Waldia to design covers for all 40 Shakespeare plays, with The Tempest, Othello, A Midsummer Night's Dream, Twelfth Night, Taming of the Shrew, and Julius Caesar coming next. "The neat thing about these covers is that they remind people of modern environments—airport signage, the icons and imagery in their phones, modern web apps and neon signs," Waldia tells me. "Shakespeare's fan base is such a broad spectrum, it's challenging to strike a balance and still resonate with people with varying familiarity with the plays."
Now, thanks to Waldia, Pelican has a line of Shakespeare covers that should play just as well with young people as they do with Harold Bloom. Of course, once they open up the play and start reading? They're on their own.
Nathalie Miebach was born to do what she does. Her mom was a basket weaver. Her dad was an engineer who worked on the cameras of the Hubble Space Telescope. As a kid, she was used to a house littered with her parents' works-in-progress: half-woven baskets left in every room, or a scale model of Hubble abandoned on the piano, both of which she was dying to play with.
Now that she's grown up and an artist, it's hard not to see Miebach's childhood in her colorful Sandy Ride sculptures. Although they look at first glance like elaborate constructions of painted tinker-toys, the sculptures are actually intense visualizations of weather data from 2012's Hurricane Sandy, reflecting both her father's love of science and scale models and her mother's love of weaving together complicated 3-D patterns.
In Miebach's series Sandy Ride, each sculpture visualizes weather and ocean data from a specific site trashed by Hurricane Sandy in 2012: Jane's Carousel in Brooklyn, Coney Island, Staten Island, Seaside Heights, and so on. The sculptures look like Rube Goldberg-esque roller coasters, full of toy-like details. Miebach says they're designed to bring viewers closer, luring them into a complex, multi-dimensional dataset without explicitly framing it as science.
Up close, each sculpture is actually a rigorous data visualization. Miebach starts by selecting a specific location and choosing two or three variables, like temperature and precipitation, to chart on a 3-D grid. This creates a skeleton for the finished structure, showing what happened in that area over time. Each data point in this skeleton is tagged with a number to show what it represents.
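As a very loose sketch of that first step, here's how two weather variables plus time might be plotted as a 3-D skeleton of points with matplotlib. This is my own illustration, not Miebach's process, and all of the readings are invented.

# A rough illustration (mine, not Miebach's process) of mapping two weather
# variables plus time onto a 3-D "skeleton" of points; all readings invented.
import numpy as np
import matplotlib.pyplot as plt

hours = np.arange(72)                                  # time along one axis
temperature = 15 + 5 * np.sin(hours / 12)              # hypothetical readings
rain = np.clip(np.random.default_rng(0).normal(2, 1, hours.size), 0, None)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(hours, temperature, rain, marker="o")          # one point per hour
ax.set_xlabel("Hour")
ax.set_ylabel("Temperature (°C)")
ax.set_zlabel("Precipitation (mm)")
plt.show()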
The rest of the sculpture is then built on top, evoking the character of the destroyed amusement park itself. For example, one sculpture called The Last Ride doesn't just look like the Jersey Shore roller coaster that was trashed by Hurricane Sandy; the height of the sculpted coaster is determined by the wind and gust speeds recorded when the storm struck. A dragon that snakes around the top of the coaster, meant to represent Sandy, is topped by a thicket of wind turbine-like mechanisms and hurricane flags. These show data about wind speed and location as captured by ocean buoys along Sandy's path.
Miebach says she was inspired to start visualizing scientific data this way after taking an astronomy class and a basket-weaving class at the same time in the late '90s. That's when she realized she could chart star data just as accurately in the criss-crossing reeds of a basket as she could in an Excel grid. "I was like, wow! I can address questions of science and data through art," Miebach says. A later 18-month residency on Cape Cod spent collecting meteorological data convinced her to bring the same approach to visualizing the weather through sculpture.
As for why she specifically chose Hurricane Sandy and amusement parks for Sandy Ride, Miebach says they're not just colorful subjects: they represent mankind's hubris. "It's sort of a metaphor for the future," Miebach says. "We're rebuilding all these amusement parks right on the edge of the water, knowing full well climate change means there will inevitably be another Sandy."