Twenty-three-year-old Seattle-based designer Eleanor Lutz is one of our favorite practitioners of the art of science visualization, but it's been a while since we last heard from her. Her latest work was worth the wait, though: four beautiful animated infographics about deadly viruses, designed to look like trading cards.
Describing viruses as "a biological version of snowflakes," Lutz designed trading cards for dengue (a close relative of Zika and yellow fever), adenovirus (which causes the common cold), chlorella (a strange virus that only infects green algae), and HPV (which is linked to cervical cancer). Each virus has five-fold symmetry, meaning each has an icosahedral shape. All of the trading cards were designed in UCSF Chimera, a molecular modeling program, based upon data from the Protein Data Bank.
Although she calls them trading cards, Lutz says we shouldn't expect a complete set anytime soon, nor is she planning on printing them out. "I'm not actually planning on turning the animations into printed cards, but I'd like to eventually make a couple more of them at some point," she says by email.
Too bad. I would back a Kickstarter campaign for a complete set of Virus Trading Cards with special lenticular holograms that animate when you turn them between your fingers. Still, Lutz says she hopes to continue the series with more viruses some day, even if they only exist as GIFs on the web. "There are a lot of awesome virus types that I didn't get to show here, like bacteriophages and asymmetric viruses," she says.
More of Lutz's data visualization work can be found at her website, Tabletop Whale.
When it comes to shooting from behind the 3-point line, the Golden State Warriors' Stephen Curry is the NBA player to beat. In the 2015-2016 regular season, he made 402 3-pointers. That's the equivalent of hitting 103 home runs in a baseball season, which is impressive by any measure, but as this visualization from the New York Times's Upshot shows, Curry is not just a little better than his peers when it comes to 3-point shooting. No one else, past or present, even comes close.
The chart tells you that at a glance. (Just look at the line tracing Curry's 3-point performance in the '15-'16 season compared with everyone else's.) But if you want to dig deeper, you can unearth plenty of intriguing data points.
The chart contains 752 lines, one for each player who managed to end up in the NBA top 20 for his season. The visualization starts in 1980, which is the first year the NBA played a full season in which players were allowed to make 3-point field goals. In that year, Ricky Sobers managed to get into the top 20 for shooting just 21 3-pointers. Today, Stephen Curry could probably shoot 21 3-pointers in his sleep.
What's truly impressive about Curry's performance in the 2015-2016 season isn't just that he soundly beat every NBA player ever. It's that he's gotten significantly better at taking 3-point shots over time. He made 286 3-pointers in 2014-2015, 261 in 2013-2014, and 272 in 2012-2013. Curry not only set the new record, he made roughly 40% more 3-point shots last season than his previous personal best, which was itself a record.
The reason music games like Guitar Hero and Rock Band work is that they make it so easy to pick up an axe and shred. But as internet spoilsports are quick to point out, there's a huge gulf between playing a video game guitar and actually being proficient on a real one. Because when it comes to musical instruments, the guitar is a UI nightmare.
That's a problem a new startup called Magic Instruments wants to solve. With industrial design from Ammunition, the company's Rhythm Guitar is a sleek, futuristic instrument that looks a little like Jony Ive's personal axe. Thanks to buttons that replace the frets, it feels a little like a Guitar Hero controller on steroids. Except unlike Guitar Hero, there's nothing make-believe about the Rhythm Guitar. It produces real notes and chords from an internal speaker. It looks and sounds just like a real guitar, and it can be hooked up to an amp and played like one, but it's actually more like a synthesizer. All the notes and chords it produces are digital, not analog.
The Rhythm Guitar is the brainchild of Magic Instruments CEO Brian Fan. Although he's spent the past few years in tech working for companies like Microsoft and Expedia, Fan was a piano student at Juilliard for the better part of a decade. There, he spent tens of thousands of hours learning music theory. When his daughter was born, he decided to take up guitar to play her lullabies as she went to sleep. It was then that Fan discovered a horrible truth: his tens of thousands of hours of training could not prepare him for the horrors of a guitar fretboard.
"The guitar has an awful UI," Fan tells me. "The reason why is it's an artifact of physics. Players need to push down strings in very specific patterns so that when they strum, they create the right frequencies." It takes huge reserves of patience and muscle memory to play even simple guitar chords, let alone full songs. Even after hundreds of hours of lessons, Fan remained a terrible guitar player. "It got me wondering, if we could redesign the guitar's user interface from the ground up, how would we do it to encourage people to play?" he says. Magic Instruments wants to answer that question by designing a guitar that anyone can play, even if they have no musical training.
The Rhythm Guitar looks and feels like a "real" guitar, but there's little that is analog about it. It's really a computer with 15 rows of buttons running up and down the neck, replacing a standard guitar's frets. When you push one of these buttons and strum, the Rhythm Guitar produces one of 90 chords, each exactly tuned to the key and scale you want to play. According to Fan, that's enough to play 95% of rock or country songs, and the Rhythm Guitar even comes with an app that will lead beginners through a song's chords and lyrics. The fret buttons are organized logically, so as you go up the fretboard, each "fret" steps up the scale. If you were in E major, the first fret would have chords in C, the second chords in D, the third chords in E, and so on.
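Fan doesn't spell out the firmware's exact mapping, but the idea of fret buttons stepping through a key's chords can be sketched in a few lines. This is a hypothetical illustration of the concept (the note tables, chord qualities, and function name are all assumptions, not Magic Instruments' actual layout):

```python
# Hypothetical sketch of a fret-to-chord mapping: each fret button
# selects the next diatonic chord of the chosen key, so the player
# only picks a position and strums.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]                # semitone offsets
MAJOR_KEY_QUALITIES = ["", "m", "m", "", "", "m", "dim"]   # I ii iii IV V vi vii

def chord_for_fret(key_root: str, fret: int) -> str:
    """Return the chord a given fret button would trigger in a major key."""
    root_index = NOTE_NAMES.index(key_root)
    degree = fret % 7                                      # wrap past the 7th button
    note = NOTE_NAMES[(root_index + MAJOR_SCALE_STEPS[degree]) % 12]
    return note + MAJOR_KEY_QUALITIES[degree]
```

With a layout like this, the first button gives the key's tonic chord and each higher button walks up the scale, so a beginner only has to remember positions, not finger shapes.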
One thing the Rhythm Guitar won't do is teach you to play a real guitar—rather, it lowers the barrier to entry for beginners who want to pick up an instrument and just start playing music, no training required. It eliminates the need for complicated, multi-digit muscle memory to play any given chord.
But why buy a Rhythm Guitar instead of a cheap acoustic guitar, if the Rhythm Guitar doesn't let you learn to play "real" guitar? Fan argues that the deck is stacked against anyone who wants to learn guitar past a certain point in their life. He says that the vast majority of people who learn to play an instrument do so before the age of 14, "because it's the only time in your life where you have thousands of hours to spare." The Rhythm Guitar is aimed at people who love the sound and the sex appeal of a guitar, but who recognize they will probably never learn to play a real one. And there's no shame in that, says Fan. After all, he—a classically-trained pianist—is in the same camp.
After all this talk about the guitar's UI nightmare, I asked Fan a simple question: if the guitar's so hard to learn, why does everyone want to play one? "It's because the guitar rules the stage," says Fan. The guitar makes you the star of the show: you can play with a total theatricality that doesn't come as naturally when you're behind the keyboards or drums. You can jump in the air, hit a power chord, and land in a split—drama, Fan says, that's unmatched by other instruments.
What the Magic Instruments Rhythm Guitar sets out to do is open up that theatricality and fun, without requiring hundreds of hours of practice. It's available for pre-order today on Indiegogo.
Home improvement reality shows like HGTV's Love It Or List It sell homeowners a dream. By putting their home in the hands of a fix-'er-up dream team, they can avoid the headache and expense of renovating their homes themselves.
But as Deena Murphy and Timothy Sullivan recently learned, the "reality" these shows are selling is a fantasy at best—and a nightmare at worst. In a new lawsuit reported by The Miami Herald, the couple allege that after Love It Or List It did an episode on their home, they were stuck with an "irreparably damaged," vermin-ridden house, and $85,000 worth of inept workmanship for which they ultimately had to pay the bill.
According to the lawsuit, filed in North Carolina's Durham County Superior Court, Murphy and Sullivan were looking to renovate a rental property with the aim of moving into it with their kids when an ad for Love It Or List It caught their eye. The couple was enticed to deposit $140,000 in the account of Big Coat TV, which produces the show. In exchange, Love It Or List It would arrange for the house to be renovated, with all unused money being returned to the couple upon the end of the episode's filming. Or, as the lawsuit puts it: "The homeowners' funds essentially pay the cost of creating a stage set for this television series."
It seems like a strangely insubstantial pact to make with the reality TV devil. In exchange for reality TV fame, Love It Or List It would renovate a person's house on their dime, without giving them an actual say in the work being done. Love It Or List It's hosts include Hilary Farr, a touted "international home designer," and David Visentin, a Canadian real estate agent, who ostensibly would lend their expertise to the process for free. But in reality, Murphy and Sullivan's lawsuit contends that Farr and Visentin were no more than "actors or television personalities playing a role for the camera," and that "none of them played more than a casual role in the actual renovation process."
In fact, according to Murphy and Sullivan, a licensed architect was never even approached to develop their home's renovation plans. As for the actual work, it was done without the couple's approval by Aaron Fitz, a contractor with some seedy ratings on Angie's List. Although Fitz was paid $85,786.50 by Big Coat for work done on the house, Murphy and Sullivan say their home was "irreparably damaged," filled with "low-grade industrial carpeting, unpainted surfaces and windows painted shut," and holes in the floor thanks to shoddy duct work "through which vermin" can now scurry, night and day. For its part, Big Coat Productions claims the allegations are false, and it intends to defend itself in court.
Something tells us this kind of reality show home renovation nightmare is a pretty common occurrence. Behind the superficial veneer of luxury and glamour, these shows have a long history of ruining perfectly good designs in exchange for ratings.
Quick. Draw a bike. Don't look at one first, just draw one. Done? Okay, now step back and look at your bike. It's total garbage, isn't it? It's not just you. Turns out that basically no human being on the planet can accurately draw a bicycle. At least, that's what the non-scientific evidence of Italian designer Gianluca Gimini suggests.
For years, as part of his Velocipedia project, Gimini has been asking friends, acquaintances, and random strangers to draw a bike. Then he uses the drawings as blueprints for some truly comedic 3-D mockups. They hilariously emphasize just how poor our basic understanding of simple mechanics really is.
Gimini, who has collected hundreds of bike drawings, says people tend to get the same things wrong every time. They'll get the general shape of a bike right — two wheels, some handlebars, a pair of pedals, and a crossbeam — but they usually whiff on the chain and gear assembly, the part of the bike that, you know, actually makes it work. "Some people attach the chain to the front wheel, or stretch it between the front and rear wheels," he says.
Another common issue is that people tend to draw parts of the frame attached to the hub of the front wheel, making the bike they're drawing unsteerable. This is the sort of mistake that might not be obvious when making a 2-D drawing, but which people can see right away when it's turned into a 3-D model.
The Velocipedia project started in 2009, when Gimini was reminiscing to a friend about a kid he knew in school who was asked by his teacher how a bike worked. "The poor kid panicked," Gimini remembers. "He couldn't even remember if the driving wheel was the front or the rear one." Gimini's friend laughed at the story, disbelieving that anyone who had ridden bikes could be ignorant of how they work. So Gimini challenged him to draw one on a napkin. Like Gimini's old school chum, his friend spectacularly failed. "That was when I started collecting bike drawings."
Since then, Gimini has collected 376 bike drawings, mostly from friends but also from strangers as part of events like the Venice Biennale and Milan Design Week. He estimates that only about one out of four people can draw a bike that would actually be rideable on the first try. Although everyone is asked to draw a standard men's bicycle, Gimini observes that men and women tend to focus on different details. "Drawings by women are richer in mudguards, bells, and lights, while male subjects tend to over-complicate the frame," he says, while also noting: "I am perfectly aware that 376 drawings isn't a significant statistical sample size."
So why can't people draw bikes? Gimini says he still doesn't know. "Anyone can spot the mistakes on other people's bicycles, but when asked to draw one, subjects usually end up making the exact same errors," he says. "It's like our brains shut down in panic."
Cult Swiss consumer electronics maker Punkt has been trying to elevate the standard of design in home gadgetry since its inception. As part of that mission, the Lugano-based firm has regularly teamed up with design schools and universities, such as the Royal College of Art in the U.K. or the University of Art and Design in Basel, to maintain links with new generations of designers, and keep itself open to fresh ideas.
This year, Punkt partnered with École cantonale d'art de Lausanne (ECAL) to give common home electronics more human-centric designs. Finished designs include an e-ink weather station, a printer you hang on your wall like a painting, a clock that is designed to fit in the corner of your room, and a hexagonal projector that can throw images anywhere. The thesis: Gadgets should be subservient to humans, not their masters.
The partnership started in September, when Punkt founder Petter Neby addressed Lausanne's Master Product Design class. He challenged them to design new types of gadgets that could rebalance people's relationship with technology, by addressing problems such as gadgets with too many functions and the day-to-day stress of being too jacked into the Internet. But Neby's goal wasn't just to design pie-in-the-sky gadgets that could never be manufactured at scale; he also taught ECAL's students about the supply chain realities of creating new hardware.
The eight products that came out of the year-long collaboration aren't Punkt products yet, but they very well could be included in future catalogs. In addition to the prototypes listed above, the collection includes a flashlight that can double as a table lamp; a digital camera that strips away the lens finders and screens of modern digital cameras to keep photographers focused on the here-and-now; an extension cord that automatically retracts so as to only give you as much cable as you need; and an Internet radio that allows users to tune to new stations as easily as turning the hands on a clock.
According to Neby, keeping an open mind to ideas from up-and-coming design students is part of what helps Punkt succeed as a niche electronics maker. "It's always good to maintain links with each new generation of designers," he says in a statement. "It keeps our business fresh, which is good for us, but I do also believe that companies have a duty to contribute to the education of people they will eventually be wanting to employ."
You can read more about the ECAL designers who collaborated with Punkt and their prototypes here.
Tilt Brush—a VR painting app which Co.Design previously described as Microsoft Paint for the Year 2020—is a godlike tool that allows artists and designers to create awesome three-dimensional sculptures in mid-air. But how do actual artists and designers use it?
The latest Google Chrome Experiment is a project called Virtual Art. This interactive website allows you to step right into the VR studio and watch six world-renowned artists at work, using Tilt Brush in conjunction with what appear to be HTC Vive headsets to create incredible 3-D designs. When the site fully loads, you can "replay" the creative process of some pretty impressive talents working in virtual reality with Tilt Brush for the first time—including fashion artist Katie Rodgers, conceptual designer Harald Belker, illustrator Christoph Niemann, and sculptor Andrea Blasich—as they create sleek virtual models of everything from a race car to a cybernetic bull.
But Virtual Art doesn't just "replay" the virtual brushstrokes of the recorded artists. It also documents their physical movements, as captured by a Kinect-like depth-sensing camera. As each artist works, they're represented in front of their Tilt Brush sketch as a vivid ghost of bright, pointillist dots. As these virtual ghost designers work, you can zoom in, and rotate around them to get a better look at what they're doing.
Although virtual reality headsets like the Oculus Rift and HTC Vive are becoming more common, experiencing Tilt Brush firsthand still comes with a minimum $1,500 price tag. So what's really cool about Virtual Art is the way it brings what is still an exclusive experience into your browser. By simulating both the designer and the design process in the same virtual space, Google has made VR more tangible for us "norms" who still haven't been able to experience the medium's power for ourselves.
Today, the biggest hurdle when it comes to designing new gadgets is battery technology. These big, bulky things restrict the forms our smartphones, computers, and wearables can take, and unfortunately, battery technology is so stagnant that there's no promise of things getting better any time soon.
But what if you could leave the battery out of the equation entirely? That's just what the University of Washington's Sensor Lab has done. Researchers there created the WISP, or Wireless Identification and Sensing Platform: a combination sensor and computing chip that doesn't need a battery or a wired power source to operate. Instead, it sucks in radio waves emitted from a standard, off-the-shelf RFID reader—the same technology that retail shops use to deter shoplifters—and converts them into electricity.
The WISP isn't designed to compete with the chips in your smartphone or your laptop. It has about the same clock speed as the processor in a Fitbit and similar functionality, including embedded accelerometers and temperature sensors. "It's not going to run a video game, but it can track sensor data, do some minimal processing tasks, and communicate with the outside world," says Aaron Parks, a researcher at the University of Washington Sensor Lab. It accomplishes this latter feat by backscattering incoming radio signals, which Parks says is the equivalent of communicating in Morse code by bouncing light off a hand mirror.
Surprisingly, Parks says this technique is pretty fast. It has about the same bandwidth as Bluetooth Low Energy, the power-sipping wireless technology that drives most Bluetooth speakers and wireless headphones. That's what gives the WISP—which has been knocking around as a project since 2006—its new killer feature, thanks to a team-up with the Delft University of Technology: it can now be reprogrammed wirelessly. So, for example, a fitness tracker running on WISP can now download a new tracking function, or be updated to fix a bug or glitch, without being plugged into anything. No battery-free computer has managed that before.
The WISP isn't the only battery-free computer chip out there. Parks says there are also ambient battery-free sensors that leech whatever power they can from passing television waves, cell towers, and so on. But right now, these ambient battery-free computers are very slow, and aren't remotely programmable. By pairing the WISP with an RFID reader, Parks says they've been able to make a battery-free computer that's up to 10 times as powerful as an ambient one.
But what are these battery-free computers actually good for? Parks says that we're a long way off from the point when you could wirelessly power an iPhone or a laptop from radio waves, if that's even achievable at all.
But one place WISP could be used right now is architecture. By embedding these sensors in concrete structures, inspectors could detect whether or not a building's foundations had been damaged by an earthquake—without cracking anything open. Parks also says battery-free computers are perfect for implantable devices that monitor patients' health. There's also interest in WISP-like computers from the agriculture industry, which sees value in them as a way of monitoring thousands of plants at a time.
There are also plenty of consumer applications for battery-free computing, like fitness bands. And Parks says that you could even put a battery-free computer into a smartphone, which could be used to send an emergency message when the device's battery is dead. The ultimate appeal of WISP and other battery-free computers, though, is in fully realizing the Internet of Things. It could give "dumb" objects some smarts.
"Imagine if your wallpaper could run apps, or change color to match your lighting, without having to wire it into anything," says Parks. "That's not out of the question anymore."
In Alfred Hitchcock's gender-bending suspense masterpiece Psycho, the ominous manse of the Bates family stands on a hill, overlooking the family's highway motel. For the next six months, though, the Bates house — or an American gothic mansion that looks quite like it — has been transported to the roof of the Met, overlooking a vista of Central Park.
From now until Halloween, the Metropolitan Museum of Art's Iris and B. Gerald Cantor Roof Garden will play host to the PsychoBarn, a 30-foot tall sculpture by British artist Cornelia Parker. Although it looks much like the famous Hitchcock set, the PsychoBarn was also designed to evoke the iconic horror set's original inspiration: The House by the Railroad, a classic painting by Edward Hopper that hangs at the MoMA. Parker says that she wanted to merge the "wholesome" quality of barns that prompts politicians to stand in front of them for photo ops, with what the Psycho house represents: "the dark psychological stuff you don't want to look at."
But why is it called a PsychoBarn instead of a PsychoHouse? When she was originally approached by the Met to create an installation for its rooftop garden, Parker says she was overawed by the view of nearby Central Park. Looking out over an ocean of greenery hemmed in by a busy city skyline, Parker imagined adding the "incongruous" sight of a red barn. But a barn was too big for the Met's roof, so Parker realized she'd have to settle for something smaller.
So the artist compromised. "I thought, well, why don't I make the house out of the red barn?" Parker says. "I collaborated with a restoration company, who go around America and they take down old barns. So the roof of this house is made from the corrugated metal from the barn roof. The siding is made obviously from the siding of the barn. So this is the [same] barn, reconfigured."
Although the PsychoBarn is empty on the inside, the sculpture has actually been constructed in the same way Psycho's original Bates Mansion set was. There are only two sides, supported by scaffolding; view it from any other angle, and the empty interior is revealed. That means despite the PsychoBarn's creepy exterior, there's no mummified corpse sitting in a wheelchair, staring out the dusty attic oculus with her dead black eyes.
Apple's once legendary user interface design has been on the wane for the better part of a decade, as its software becomes more obscure, more complicated, and less intuitive. Nowhere is this more apparent than in Apple's mobile operating system, which abandoned good design practice in favor of superficial aesthetics with iOS 7.
Apple could still turn it around, though—and a new video by Federico Viticci exploring some possible improvements for iOS 10 shows us how. It contains some serious recommendations, and points to how Apple could fix iOS's three major UX problems: discoverability, multitasking, and Cupertino's general failure at staying competitive with other companies on a feature-by-feature basis.
When Apple first announced 3D Touch—its pressure-sensing touch-screen technology—Co.Design called it the innovation that could solve the biggest design problem in mobile: the issue of "shallow" UI, where everything on a smartphone is locked behind a one-tap, one-screen, one-action paradigm. 3D Touch fixed that, in theory, by giving iOS the equivalent of a right click. Tapping on any element on an iOS screen could suddenly be contextual, depending on how hard you pressed.
That was the promise of 3D Touch, anyway. The execution has been a joke. Apple never fully integrated 3D Touch into its own apps, and didn't make implementing it a requirement for developers, who procrastinated on adding a feature which was only supported on the most recent iPhones. The end result is that most users don't even know about 3D Touch. But even if they do, they probably don't use it, because 90% of the time, pressing harder on an app icon doesn't actually do anything.
Viticci demonstrates how Apple could better utilize 3D Touch, using it to improve the Control Center—which he envisions as a way to contextually interact with Wi-Fi networks—and set third-party apps as defaults for commonly used functions like the camera. In doing so, he illustrates a simple truth: 3D Touch will never be a solution to iOS's shallow UI problem until Cupertino goes all in on it, and makes developers do the same.
The truth is, though, iOS has many other discoverability issues, some of which can't be fixed by 3D Touch. It's a system-wide design problem—one that has hidden important features behind obscure multi-touch gestures and decoupled a labyrinthine Settings app from the apps and services it actually controls. Even 3D Touch, in a sense, fails the discoverability test: there are no on-screen visual cues that 3D Touch can be invoked in an app. Without those cues, it's not obvious that pressing harder on a 2-D app icon even should do anything contextual.
Fixing 3D Touch is a way for Apple to easily fix iOS's problems with obscuring features and information—without having to rebuild the whole thing from scratch.
Apple has always kept a tight grip on iOS, famously preventing third-party developers from plugging into the operating system in key ways. For example, Android has had third-party keyboards since 2009; Apple didn't add the same functionality until iOS 8, five years later.
Ostensibly, the reason Apple keeps such tight control over iOS is to keep the system running smoothly, and to keep users' data secure. Yet, as Apple has consistently dropped the ball on key features, it's become clear that third-party developers may play a useful role—if only Apple would let them. Loosening its grip on what developers can do with the OS might be a key part of fixing it.
For instance, take Proactive—first introduced in iOS 9—which was hailed as Apple's answer to Google Now. Terrible name aside, Proactive was supposed to "proactively" predict what apps you want to open next, what News stories you want to read, the weather near you, and more. In a post-smartphone world where context is key and our devices already know our behavioral patterns, it's a theoretically useful feature that could rid iOS of many of its pain points.
Of course, unlike Google Now, which is a genuinely practical product, Proactive isn't particularly useful. Part of the issue is simply that, by design, Apple has less access to your information than Google (which has its fingers in pretty much everything you do on the web). But Apple does have a thriving ecosystem of developers—including Google!—who could tap into Proactive and use it to surface relevant emails, documents, music playlists, or messages you need to reply to—except it doesn't, because Apple is a control freak.
Siri's another example of this. Apple's digital assistant is almost five years old, and while it's come to more devices in the intervening years, it doesn't feel like it's gotten any better in all that time. Compare Siri to something like the Amazon Echo, which literally gets notable improvements every single week. That's right. Amazon is out-designing Apple when it comes to voice assistants.
Despite being much later to the market, Microsoft Cortana and OK Google are both easily more functional than Siri, as well. The solution is simple: open Siri up and allow third-party developers to plug into her, so she can do things like create tasks in OmniFocus or order meals through Grubhub. As dozens of conversational UIs take off, this feels like the only possible way Apple can catch up.
When Steve Jobs first unveiled the iPad in 2010, he said Apple had set out to design a device that could live between an iPhone and a MacBook.
Six years later, it's pretty clear Apple failed to create a third product category. When it comes to hardware, the iPad is basically a more versatile laptop replacement, and that's not a bad thing. It's just as powerful, gets better battery life, and even has its own first-party hardware keyboard—all while still being cheaper than a MacBook. The 12.9-inch iPad Pro is an explicit concession on Apple's part that customers want their iPads to be usable as laptops now.
So why does iOS still treat the iPad as though it's a big iPhone? True, iOS 9 introduced some new multitasking features to the iPad, allowing users to run apps side-by-side. But despite the fact that the iPad is basically a laptop now, it still handles too many things like an iPhone, especially in the way two apps running side-by-side interact with each other. That, in turn, makes it incredibly difficult to be as productive on an iPad as on the laptop you're trying to replace.
Apple needs to embrace failure here, and make the iPad's operating system more Mac-like in iOS 10. iPad users shouldn't have to settle for the worst possible compromise between smartphones and desktops.
Neural networks are already good at simulating the styles of famous artists like Picasso and Van Gogh. Swedish artist Jonas Lund, who previously designed a Chrome extension that allowed people to stalk his surfing, wondered if neural networks could actually generate "better" art than he could. So he trained one on his collected works, and had it start spitting out paintings, all in an effort to train an AI to "think" like he does.
The three paintings that make up Lund's "New Now" series look almost like blurry photographs of iridescent inks dropped in milk, or some kind of psychedelic lava lamp. What they actually are is a computer's Deep Dream hallucination of the patterns and colors it thinks it sees in Lund's previous works—or at least those works represented by Los Angeles' Steve Turner gallery. In the paintings, you can almost (but not quite!) see abstract snatches of Lund's Strings Attached, a series of 24 text-based paintings on fabric wallpaper, or The Paintshow, an exhibition Lund did that features paintings made on his Microsoft Paint-like website, Paintshop.biz. In all, the neural network trained off of 222 images, including photographs of his work from multiple angles.
Where things get interesting, though, is how Lund can tweak his neural network to output different kinds of paintings. Lund's neural network didn't just train off of high-resolution images of his work, but also information about price, reception, and style. For example, Lund could instruct the neural network to weigh a composition more strongly toward elements in his works that have sold quickly, and for high prices. In other words, he could train it to create a more commercially successful painting. He could also go the other route, telling it to work in a more experimental or abstract style. "By weighing the works differently depending on their success in the market, the neural network can be trained to become more commercial, or more conceptual, or more decorative," Lund says.
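Lund hasn't published his training code, but the weighting idea he describes can be sketched simply: sample training images in proportion to a chosen "success" score, so that works that sold quickly and for high prices dominate what the network sees. A minimal, hypothetical Python sketch (the filenames, prices, and scoring function are all invented for illustration):

```python
import random

# Hypothetical catalog of works: (image_file, sale_price_usd, days_until_sold)
works = [
    ("strings_attached_04.png", 12000, 14),
    ("paintshow_22.png", 800, 290),
    ("strings_attached_11.png", 9500, 30),
]

def commercial_weight(price, days_to_sell):
    """Favor works that sold fast and for high prices."""
    return price / (1 + days_to_sell)

weights = [commercial_weight(p, d) for _, p, d in works]

def sample_batch(k, seed=None):
    """Draw a training batch biased toward commercially successful works.
    Inverting the weights would instead bias toward the slow sellers,
    i.e. the more 'conceptual' end of the catalog."""
    rng = random.Random(seed)
    return [rng.choices(works, weights=weights, k=1)[0][0] for _ in range(k)]

batch = sample_batch(8, seed=1)
```

The same mechanism, with a different scoring function, would push the network toward the experimental or decorative ends of the catalog instead.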
Of course, 222 images isn't really a big enough sample to train a neural network to generate anything that "looks" like the work of a human hand. By comparison, Google's Deep Dream AI trained on a database that contained 14 million images. But once neural networks can more intelligently learn an artist's style from a smaller sample set, artists like Lund will be able to meaningfully influence how and what they paint. To get there, neural networks will have to evolve to a human level of artificial general intelligence—and that's years, even decades, in the future. In the meantime, Lund is content to use neural networks to explore "optimizing a seemingly non-optimizable process"—the process of creating perfect art, even in the absence of any set definition of what "perfect" art would look like.
All Images: Courtesy the Artist/Steve Turner, Los Angeles
Following a plagiarism controversy, the Tokyo Olympics Organizing Committee has selected a new logo from nearly 15,000 entries for the Tokyo 2020 Olympics and Paralympic Games. And it couldn't be a safer, more conventional choice.
The previous logo, the winner of an international competition, was a stark, geometric design with an almost '80s feel. Created by Japanese designer Kenjiro Sano, the logo was scrapped after Belgian designer Olivier Debie accused Sano of plagiarizing his work for the Théâtre de Liège. Indeed, with the exception of color choice and spacing, the two logos are eerily similar, with identical geometric elements. Sano, however, denied that he had copied Debie's design.
The new emblem, designed by Tokyo-based artist and architect Asao Tokolo, is much more complicated than Sano's design. A checkerboard wreath in indigo-blue, the design is a deliberate nod to a traditional 18th-century pattern called "ichimatsu moyo," which the organizers say is meant to express the "refined elegance and sophistication that exemplifies Japan," as well as the diversity of the Olympic Games, through the interplay of different rectangular shapes.
To choose the new logo, the 21-member selection committee sorted through a whopping 14,599 different submissions, reports The Wall Street Journal. The committee also received and took into account more than 100,000 comments from the public. Each design was supposedly vetted for the possibility of design plagiarism, with all submissions checked against domestic and international trademark registries. "In order to choose an emblem we could show to the world, we based our application and selection process on participation and transparency," said selection committee head Ryohei Miyata.
Of course, in the Internet age, skirting controversy is easier said than done. Even when the bugbear of plagiarism fails to rear its stinky head, new logo designs are scrutinized, second-guessed, parodied, and even trolled to an extent that would have been unthinkable just a decade ago. Consequently, today's designers—and their clients—need to be more sure of themselves than ever before.
There's nothing about this new logo that's sure of itself. In fact, it seems gun-shy, and a little discordant, like it would be more at home on the side of a Grand Prix race car than as the prevailing symbol of an Olympic Games. Due to its relative complexity and the fact that it is based on a 300-year-old textile pattern, though, the ichimatsu moyo logo should, at least, be immune to accusations of design plagiarism.
Designing a wearable for elite athletes like LeBron James is no small feat. The Cleveland Cavaliers forward needs the ability to precisely measure his performance on and off the court, share that data with his coaches and trainers, and visualize it to make sense of everything—all at an extreme level of detail that would be overkill for regular users. That requires a wearable he can wear literally 24/7—one that's lighter, more accurate, and fits better than any other wearable on the market.
So James doesn't wear an Apple Watch, a Nike FuelBand, or a Fitbit. He wears a Whoop, a wearable aimed at helping athletes track their progress as well as predict their future performance. This so-called "performance enhancement system" designed and marketed for elite athletes has some serious design cred behind it, including MIT Media Lab's Nicholas Negroponte, data viz maestro Martin Oberhaüser, and Rinat Aruh and Johan Liden of the boutique studio Aruliden.
Together, they've created a wearable for the sporting gods, not us flabby norms. But to do so, they had to solve some of wearables' biggest design problems: It had to not look like tech masquerading as fashion, and it had to be something you could wear all day, every day, without recharging.
Whoop doesn't really look like a piece of tech. Mostly, it's a breathable knit fabric band, attached by aluminum clasps to a Chiclet-sized gadget that can measure an athlete's heart rate, accelerometry, skin conductivity, and temperature up to 100 times per second. With that data, Whoop can determine not just the usual things such as step count and calories burned, but how much an athlete has strained himself during a session, and how he'll perform the next day.
How does it do that? "Heart rate variability is the key to understanding your central nervous system, which consists of what are called sympathetic and parasympathetic responses," says Whoop founder Will Ahmed, a former squash player at Harvard turned Boston-based entrepreneur. If you have good heart rate variability, every sympathetic response will be followed by a parasympathetic response, which tells the Whoop band if an athlete is functioning at 100%. If heart rate variability is out of alignment, though, you can tell that he or she needs time to rest, and won't perform as well in a match or game until they do so.
Heart rate variability is something that's usually only measured in athletes by an EKG, not a wearable. But Whoop manages to do so thanks to a combination of sophisticated on-board sensors and the fact that it's always sampling, because you never have to take it off to charge it.
When Negroponte, an early investor and advisor in Whoop, introduced Ahmed to Rinat Aruh and Johan Liden of the New York product design firm Aruliden, they realized that they were in for a challenge. Because Whoop is supposed to be worn constantly, they needed to design a wearable that elite athletes wouldn't just not have to take off, but wouldn't want to take off.
"From the beginning, we realized that it was important this product exude precision, power, and performance," says Liden. "At the same time, though, we realized that athletes are people, too. They don't want to walk around all day wearing something that looks like a heart monitor. They want to feel like they're wearing a device that blends into their life."
The first problem to solve for was battery life. Although the interval can be anywhere from a few hours if you're wearing an Apple Watch to a few weeks if you're wearing a Fitbit, wearables still need to be taken off periodically and plugged in to stay functional. That was a nonstarter for a gadget that needed 24/7 access like the Whoop, Liden says, but the alternatives were ridiculous. "What were we going to do, make these athletes lie in bed all night with a power cable snaking up to their wrist?" he asked.
The solution Aruliden came up with is "almost like a parasite," Liden says. It's a small, lightweight battery pack that snaps onto the Whoop when it needs to be charged, and snapped off again and plugged into the wall when not needed. It's like the wearables equivalent of the USB battery pack you keep in your bag.
Another important part of the Whoop was fit. To collect the data it needs, the Whoop has to press snugly up against the skin. Athletes' wrists, however, vary dramatically in size, not just between men and women, but between sports. By using a proprietary, sweat-resistant fabric strap with a polyurethane outer, Aruliden was able to achieve a good fit without adding unnecessary weight or bulk to the design. And since the strap is only half a millimeter thick, there's little chance that it will catch on something during a game the way a watch might.
The most important part of the Whoop is the data it collects. To sell this to sports teams and leagues, Whoop needed to be able to visualize the data so athletes and coaches could use it to make intelligent training decisions.
Whoop approached Martin Oberhaüser, a German-born information and interface designer who has done work for Pepsi, Audi, BMW, and Airbnb.
"I knew Martin was one of the best information designers in the world, because that's what I Googled to find him," laughs Ahmed, who flew out to Germany on a lark to convince a reluctant Oberhaüser to join his team in 2012 when Whoop was just a bunch of wired components in a cardboard box. Once Ahmed explained to him how Whoop wasn't just a high-end step counter, Oberhaüser grew interested.
"One of the things that really intrigued me is the challenge of not just visualizing this massive amount of health data, but learning about it myself," Oberhaüser says. "A metric like heart rate variability is a fascinating look inside the central nervous system, but you need to understand it before you can frame it for someone else."
So Oberhaüser built a software dashboard, which exists on both smartphones and the web. It measures a lot of the same things that many fitness trackers do, such as calories burned and heart rate, but it breaks down athlete performance into three simple metrics: strain (or how hard you've been working out), recovery (how hard you should be working out), and sleep (how well you slept). Each of these metrics is coded to a circle, which changes color from green to yellow to red as you move away from your target zone. All of these metrics can be drilled into for more details, but they are also viewable in aggregate on a team dashboard, allowing coaches to see at a glance which of their players need a rest, and which need to be pushed harder that session.
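Oberhaüser's dashboard internals aren't public, but the color coding described above is a simple pattern to sketch: compare a metric to its target zone and bucket the deviation into green, yellow, or red. A hypothetical Python version (the zone boundaries and tolerance are invented for illustration):

```python
def zone_color(value, target_low, target_high, tolerance=0.15):
    """Green inside the target zone, yellow within `tolerance`
    (as a fraction of the zone's width) outside it, red beyond."""
    width = target_high - target_low
    if target_low <= value <= target_high:
        return "green"
    distance = (target_low - value) if value < target_low else (value - target_high)
    return "yellow" if distance <= tolerance * width else "red"

# Example: a recovery score with a hypothetical target zone of 60-80
print(zone_color(70, 60, 80))  # inside the zone
print(zone_color(82, 60, 80))  # slightly over
print(zone_color(95, 60, 80))  # well outside
```

A team dashboard would then just render one such circle per player per metric, which is what lets a coach scan the whole roster at a glance.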
This sort of overview of an entire team's health is why Whoop, a product most people have never heard of, is being courted so hard by professional sports leagues. Ahmed says he can't tell me exactly which teams are using Whoop, but he says that the wearable is currently being used across the NFL, NBA, NHL, and Major League Baseball, as well as by teams in all major conferences at the collegiate level. Whoop says LeBron James wears one (they even sent us a pic), as does Michael Phelps, the American swimmer who has won 22 Olympic medals.
Us norms, though? We can't buy one . . . at least, not yet. In fact, Ahmed is being coy about whether or not the Whoop will ever be available for consumers. But it's hard not to look at some of the design innovations his design team came up with, such as the "parasitic" battery charger, and not hope they make their way to consumer wearables in some form or another. Why should pro athletes be the only ones to get a truly wearable wearable?
The cochlea-crushing cacophony of the modern world is one we're all eager to tune out. That's why companies are now selling magical earbuds that use sound engineering to selectively tune out the outside world. But why spend $200 for a pair of earbuds when you can download an app and get the same thing for free? Well, kinda—the app, Hear, is a lot weirder.
Now on the iOS App Store, Hear is the latest project from Rjdj, long-time makers of supersonic iPhone audio apps. What adaptive headphones aim to do with hardware, Hear does with software: tune and tweak ambient noise so that you can either hear it, or ignore it, through your headphones.
Hear pumps audio in through your smartphone or headphones' built-in mic—then does all sorts of real-time audio processing on the signal before passing it along to your eardrums. So if you're in a loud office and want to tune out ambient talking, you can load up Hear in conjunction with Spotify, and it will cancel out that background noise. But you can also do the opposite: there's a super hearing mode that amplifies anyone talking to you, so if you're jamming out to Metallica with the volume turned up to 11, you still won't miss anyone whispering behind your back.
There are other experimental modes, too. A relaxation mode, which makes background noise feel less immediate; a happy mode, which makes the people around you sound like cartoon characters; even a sleep mode, which causes the noise in your bedroom at night to subside to a gentle susurrus, like waves lapping inside a seashell pressed to your ear.
All of these modes are interesting to experiment with, but since Hear doesn't have its own hardware (with specially designed embedded microphones to offset the noise) available to it, the experience doesn't feel too fine-tuned yet. The app will work with any pair of headphones, but they all have their own trade-offs. If you use Apple's default EarPods, Hear tends to pick up the sound of the headphone mic accidentally brushing against your clothes. Other headphones, while cleaner sounding, don't pick up ambient talking as well, because they are using a smartphone mic placed farther away from your ear.
Either way, all of the soundscapes in the Hear app have a sort of electro-psychotropic quality to them, as if you are hearing them through an '80s synthesizer on a feedback loop after taking several mushrooms (which is not the first time that comparison has been made, according to the app's marketing). So using Hear is almost its own acoustic experience. It's not actually directly comparable to something like the Here Active Listening System, which aims to move your headphones out of the way of the sound around you.
Instead, Hear is gloriously unapologetic about its technical limitations: it's more of an audio trip than an out-of-the-way audio experience.
American schoolchildren learn that slavery caused the American Civil War. Some historians argue that this explanation is simplistic, and that there were other contributing factors. Whether it was the sole cause or not, it's impossible to deny that slavery was a huge factor. Now, cartographer Bill Rankin has shown just how sharp the divide between North and South really was in a series of illuminating heat maps that visualize slavery in the United States during this era.
Rankin's maps start in 1790, fourteen years after America established its independence, and the first census year. That year was peak slavery for the North, where there were slightly more than 40,000 slaves, accounting for around 2% of the total population. Comparatively, 654,121 people—or about 34% of the population—were enslaved in 1790 in the South. That percentage remained consistent until 1860, when nearly 4 million people were enslaved in Southern states, compared to literally zero in the North.
According to Rankin, mapping slavery in America presents numerous problems. For one, slave distribution throughout America was never uniform, and even in the South, there were certain states and counties that were more or less likely to be slaveholding. In 1860, for example, Delaware and Maryland had fewer slaves than the Mississippi belt, where the percentage of slaves approached 95% of the total population. "Should a county with 10,000 people and 1,000 slaves appear the same as one that has 100 people and 10 slaves?" Rankin asks. So instead of shading large geopolitical areas with a single color based upon the number of slaves inside it, Rankin took a pixelated approach. He divided the country into 250 square mile cells and colored them individually, in the hopes of "[refocusing] the visual argument of the maps—away from arbitrary jurisdictions and toward human beings."
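Rankin's own workflow isn't described in detail, but the cell-based approach can be sketched: snap each county's population counts onto a fixed grid and accumulate absolute numbers per cell, rather than shading whole jurisdictions by percentage. A simplified Python illustration, with invented coordinates and counts (and degrees standing in for Rankin's 250-square-mile cells):

```python
CELL = 0.25  # grid cell size in degrees; a stand-in for 250-sq-mi cells

def cell_of(lat, lon):
    """Snap a coordinate to its grid cell."""
    return (int(lat // CELL), int(lon // CELL))

def grid_counts(points):
    """points: (lat, lon, enslaved_pop) tuples, e.g. county centroids.
    Returns the enslaved population per cell. Because each cell carries
    an absolute count, a county of 10,000 people with 1,000 enslaved no
    longer looks identical to one of 100 with 10, as both would on a
    percentage-shaded choropleth (both are 10%)."""
    counts = {}
    for lat, lon, n in points:
        c = cell_of(lat, lon)
        counts[c] = counts.get(c, 0) + n
    return counts

# Two hypothetical counties with the same 10% share but very different scale
counts = grid_counts([(32.10, -90.20, 1000), (32.60, -90.20, 10)])
```

Coloring each cell by its own count is what shifts the visual argument from jurisdictions to people.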
After visualizing slavery by decade, Rankin put together a map visualizing Peak Slavery across the country, where each dot represents the ten-year period in which any given area held the most slaves. What it shows is that slavery wasn't petering out in the South when the Civil War was fought. In fact, with the exception of a few areas, Rankin says "slavery in the south was only headed in one direction [in 1860]: up." Yet by 1870, the North and South had the exact same number of slaves: zero.
The Civil War may have been a bloody, traumatic ordeal with a legacy that still shapes the country today, but Rankin's maps make one thing crystal clear: without the Civil War, slavery wasn't just going to fizzle out.
The point of going to most galleries is to look at what's hanging on the walls. Yet as part of his new show, the German artist Peter Zimmerman wants you to pay just as much attention to the ground—which is why he's painted more than 1,400 square feet of museum flooring with colorful resin. It's designed to look as though his paintings are dripping off the walls, forming candy-colored resin pools on the floor.
On display at Germany's Museum für Neue Kunst, Zimmerman's Freiburg School exhibition is all about highlighting the way art can (and should) interact with its surroundings. Although the wall paintings are more conventional abstract oil paintings, the floors are painted with up to eight layers of transparent colored resin that subtly alter the atmosphere of each gallery.
"My main interest with the show was to try to tie these two different types of paintings together," Zimmerman says. "I thought about the floor paintings almost like pedestals on which the oil paintings are being presented." The floor paintings are meant to be scuffed, scratched, and even destroyed over time as museum-goers come and go—a process that Zimmerman says will eventually cause the resin floors to more closely resemble the brush strokes of the oil paintings they are paired with.
Beyond the juxtaposition of horizontal and vertical art, another theme of Freiburg School is the interplay between the analog and the digital. The art on the walls has all been painted traditionally, but the floor paintings have multiple overlapping layers of transparency, each containing subtle elements symbolizing computer UI elements such as browser tabs, windows, and screens. The end result is that the oil paintings on the walls reflect off the shiny floors, almost like a piece of art reflecting behind you in your computer monitor.
According to Zimmerman, the exhibition will continue until June 19, at which point the oil paintings will be taken off the walls and put into storage. As for the floor paintings, they're too big to remove. He'll have to take a sledgehammer to them—but that seems appropriate. Freiburg School isn't just about making art gallery floor candy. It's about the impermanence of the digital compared to the resilience of the analog.
It was late March, and just four days before Microsoft CEO Satya Nadella was due to announce the company's new focus on "conversation as a platform," Lili Cheng woke up to discover that one of her chatbots had gone rogue.
The chatbot in question was Tay: an AI-driven Twitterbot that used natural language processing to emulate the speech patterns of a 19-year-old American girl. Presented as a precocious "AI with zero chill," Tay could reply to Twitter users who messaged her, as well as caption photos tweeted at her. In fact, she described a selfie of the 51-year-old Cheng as the "cougar in the room." But just 16 hours after joining Twitter under the handle TayandYou, Tay had become a super-racist sexbot. Under the influence of trolls, she called President Obama a monkey. She begged followers to, ahem, interface with her I/O port. She talked a lot about Hitler.
It was a rough day for Cheng, who is director of Microsoft's experimental Future Social Experience (FUSE) lab where Tay was developed, but it's part of her job. For the last 20 years, Cheng has helped Microsoft explore the limits of user experience and interface design. During that time, she's had many projects, yet she tells me they've all shared a few common characteristics. "They've all tended to be social, interactive, high risk, and ambiguous," Cheng says. Just like Tay.
You might expect the person heading up Microsoft's chatbot project to have a PhD in AI from MIT, but Cheng's path to directing FUSE is as circuitous as the path between Tay's artificial neurons.
Born in Nebraska to a Chinese mother and a Japanese father in the mid-1960s, Cheng studied architecture at Cornell before moving to Tokyo in 1987. There she landed a job working at the male-dominated architecture and design firm Nihon Sikkei (despite not being able to speak Japanese). She moved on to Los Angeles, where she worked on urban design projects at Skidmore, Owings & Merrill.
By her mid-twenties, Cheng had a prestigious architecture career ahead of her. Then, a trip to New York changed her life. There, she met several prominent thinkers exploring the emerging intersection of computers, art, and design, including Red Burns, the godmother of N.Y.C.'s interdisciplinary tech scene, and several members of the nascent MIT Media Lab. She came back to Los Angeles, quit her job at SOM, and enrolled in the interactive telecommunications program under Burns at the Tisch School of the Arts at NYU.
"My parents thought I was crazy," Cheng remembers. "They said: 'You have this amazing, lucrative job! Who are these people you met? What are you going to do with them instead?' And I was like, I don't really know what these guys do. Something with computers?"
But in truth, the move wasn't as much out of left field as it seemed at first. Computers had fascinated her since she'd gone to Cornell, one of the very first schools that embraced the PC as a design tool. "When you're an architect, working to build cities, it's very permanent," Cheng explains. "I think that's why I found computers so interesting. When you design on a computer, you turn it off, and it's just not there anymore."
After Cheng graduated from Tisch in 1993, she joined Microsoft, where she became a member of the team building out Windows 95. There, her expertise in both architecture and computer science quickly proved an asset.
"Working on Windows felt like an urban design project," Cheng says. She says that like architecture, designing an operating system is all about the interplay of structures (software), systems (UI/UX), and open spaces (the desktop). "You have to be dynamic about how all these pieces come together into a bigger system, just like in a city, while asking yourself simple questions like, where do people gather, and how can we make what we're building beautiful to sit in?" she says.
After working on Windows 95, Cheng's next project helped establish her bona fides with conversational interfaces and natural language processing: Comic Chat, a Dada-esque messaging app that shipped as Internet Explorer 3.0's default chat client back in 1996.
Although it's somewhat ignominiously remembered as the app that installed Comic Sans on the computers of millions of Windows users for the first time, Comic Chat is a cool little slice of early messaging history. Using the art of famed underground comic artist Jim Woodring, Comic Chat split up conversations between word balloons over the heads of surreal, almost hallucinatory comic characters, based upon the app's internal grammar ruleset.
"Comic Chat was amazing," Cheng says. "The frugality of the comic style ended up kind of matching the text, but sometimes the fidelity of that system didn't match people's expectations"—for example, by breaking up long sentences between panels in unintuitive ways, or placing characters in the panels in an order that doesn't follow a user's own internal understanding of the way the conversation is flowing.
That mismatch between a user's expectations of what a chatbot, AI, or conversational interface should do and what they actually do is something that Cheng says has been a mainstay in her work at Microsoft ever since. Sometimes it's the focus, like when Tay became a sudden PR disaster for Microsoft, and sometimes it's on the periphery, like it was with Comic Chat. But it's always there—a gap between what humans want from computers and what computers can actually understand. So how do we get beyond it?
When it comes to conversational interfaces, Cheng says that the big challenge is programming AIs that truly understand the personality of who they're talking to. "Chatbots need to be more people-centric, and less general purpose," she says. "Right now, all they know when they're talking to someone is what words that person is using. What they don't understand is context and personality. They don't get if you're serious, if you're being goofy, or if you're trolling."
In a way, that's what happened with Tay. Instead of disregarding the racist garbage a handful of Twitter trolls were feeding her, Tay gave it just as much weight as the messages being sent to her by people who were earnest or genuinely curious. But that's not what a human would have done. "Most people get a sense when they talk to someone new what their personality is like, and adjust how they react accordingly," Cheng says. AIs like Tay need to get to the point where they're smart enough to do the same, not just to avoid being trolled, but so they can make users feel comfortable, using tools like humor to get people talking. "I think for people to fall in love with a UI, it needs to feel natural," she says.
Talking to a bot doesn't feel natural, yet. But Cheng says the time of the conversational interface has come, even if they're still works in progress. "We should remember that computers are still very hard to use for the majority of people out there," Cheng says. Conversational interfaces can help solve that, which is why companies like Microsoft and Facebook are doubling down on them. But the tech industry is bullish on conversational UIs for reasons that go beyond Luddites, or people with accessibility needs. Almost a decade after Apple released the first iPhone, "everyone has app fatigue now," Cheng says. Which is why both developers and users alike are pining for the holy grail of a more natural user experience: a computer you can talk to like a human.
As Tay shows, there will be growing pains before we get there. But Cheng is confident that a time will come, and soon, when conversational interfaces will be a huge part—maybe even the biggest part—of how people use computers. Because at the end of the day, there's nothing more human, she says, than conversation. "No matter where you are, or what country you're in, people are going to want to chat."
For a show about nothing, Seinfeld introduced us to an incredibly diverse universe of vain, weird, and colorful New Yorkers, as seen through the judgmental eyes of its four main characters: Jerry, George, Elaine, and Kramer. So for its latest infographic, Brooklyn-based poster house Pop Chart Lab opted to map out every character on Seinfeld, showing exactly how they are connected.
"A Chart About Nothing: The Connected Characters Of Seinfeld" contains simple line drawing illustrations of over 230 Seinfeld characters, including Frank Costanza, the Maestro, Joe Davola, the Soup Nazi, Newman, Mickey, and more. Around each locket-style portrait is a color-coded wheel that shows how they connect to the main characters and other secondary characters on the show.
It's not actually very sophisticated; I think we could even say it's one of Pop Chart's lazier infographics, aimed at appeasing die-hard fans instead of actually sharing insights about its subject matter. Even so, in a way, A Chart About Nothing's total lack of ambition perfectly suits its choice of subject matter. You can preorder it now on Pop Chart's site for just $35.
For years now, we've been hearing about how the Internet of Things will connect every object in our homes. And for years, the vast majority of those objects have stayed dumb.
"What companies are struggling with when it comes to the broad label of the IoT on the consumer side is what the actual problem trying to be solved is," says Rob Chandhouk, president of the sensor startup Helium. "Look at Samsung's new smart fridge, which they're marketing as the hub for your home. Do you think of your refrigerator as a hub?"
Chandhouk doesn't believe that the IoT will be making any big breakthroughs with consumers any time soon. Instead, Helium is betting big that the IoT's real utility is on the commercial and industrial side of things. It looks like numerous big players agree, based on a recent $20 million Series B funding round including Alphabet's investment arm GV, previously known as Google Ventures. But it's another investor in the latest round that gives us our clearest hint of where IoT is going: Munich Re, an insurance and risk management company.
Why would an insurance company invest in the IoT? To understand, first we need to talk a little about what Helium does. The company sells smart sensors, about the size of a deck of cards, which can measure temperature, pressure, light, humidity, barometric pressure, and more. The sensors are wireless, and can run for up to three years without being plugged in. Even better, they run apps, so they can be custom-tailored to send out alerts if, say, they detect that the temperature is too low, or the humidity too high. All of Helium's sensors are designed to be secure, and they're painless to install: You just slap them on the wall and tell them what you want to do through a smartphone or web app.
A box full of secure, easy-to-install, Internet-capable sensors might have only abstract value on the consumer side of things. But on the enterprise side, the value is immediately apparent. A hospital could install a Helium sensor in its drug refrigerators, and get alerts the second the temperature drifts out of range. A restaurant could do the same thing if someone accidentally leaves the freezer open overnight. Retail stores could use Helium to figure out that sales in their stores drop by 1% per degree Fahrenheit when the thermostat is set over 70. "What it boils down to is this: How do you detect information from the physical world, and bring it into your business?" Chandhouk says.
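Helium's sensor apps are proprietary, but the alerting pattern Chandhouk describes is a plain threshold rule: read, compare, notify. A hypothetical Python sketch of the drug-refrigerator example (the safe band and the alert hook are invented for illustration):

```python
# Hypothetical safe band for a drug refrigerator, in Celsius
SAFE_LOW_C, SAFE_HIGH_C = 2.0, 8.0

def check_reading(temp_c, alert=print):
    """Fire an alert the moment a reading leaves the safe band.
    Returns True if the reading was in range, False otherwise."""
    if temp_c < SAFE_LOW_C:
        alert(f"ALERT: fridge too cold ({temp_c:.1f} C)")
        return False
    if temp_c > SAFE_HIGH_C:
        alert(f"ALERT: fridge too warm ({temp_c:.1f} C)")
        return False
    return True

# Simulated hourly readings; the spike could be a door left open
readings = [4.1, 4.3, 9.6, 4.0]
ok = [check_reading(t, alert=lambda msg: None) for t in readings]
```

The interesting part isn't the rule itself but where it runs: pushing the check onto the sensor means the alert goes out immediately, rather than after a batch upload.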
With that in mind, it's easy to see why insurance companies are interested in Helium. "If you're a restaurant insurer, and you have to pay off a quarter-million dollar claim because someone left the refrigerator open overnight, you'd want some way to detect that as soon as possible, so that claim can be avoided," Chandhouk says. "In the insurance industry, loss prevention is the name of the game." Alphabet's interest also makes sense. Not only is there potential to make huge money in enterprise IoT, but Helium could one day give Google the physical tools with which to make sense of the world.
Consumers may still not quite understand the appeal of the IoT, but the bean counters sure do. Which is why Helium might be a sign of what's to come. In the future, you might not have to pay to make your home into a smart home. Your insurance company might just subsidize it. You might not be willing to install a $200 sensor that turns off your water when it detects a leak, because those leaks don't happen very often. But to an insurance company faced with a $10,000 water damage claim? That $200 sensor is a steal.
The human brain is a mysterious hunk of meat. For example, as you read this, you're "hearing" the words in your brain, and sorting out their meaning, but where exactly are these gray matter word lockers? Wonder no longer. This interactive brain map from scientists at the University of California, Berkeley allows you to see exactly where the words are stored in the furrows of your cerebral cortex.
To create the map, the team scanned the brains of seven different volunteers, who were asked to listen to a two-hour chunk of The Moth Radio Hour, a popular storytelling podcast. As the volunteers listened, the team was able to see exactly where and when the oxygen levels in their brains changed, which let them map certain words onto certain parts of the brain.
What the team discovered was that they could actually map 12 different categories of words to different areas of the brain. In general, there are more parts of the brain dedicated to processing words related to visual stimulus, violence, time, place, and the human body, which from both an evolutionary and grammatical perspective makes sense. There are fewer parts of the brain that light up when you talk about numbers, the indoors, or other people, however. (It's interesting to wonder if the brains of mathematicians have different semantic brain maps than mountain climbers, but sadly, Berkeley's research doesn't cover that.)
The map lets you explore which words map to which brain folds, just by clicking on them. What's interesting is that on a furrow-by-furrow basis, these areas don't necessarily group together—but even so, there are larger continents of the brain that tend to be, say, more visual than violent, or more social than tactile. Depending on how you view the interactive map, the brain either looks like an irregular, candy-colored globe of rigorously defined semantic meaning, or a chaotic, technicolor yarn ball of feelings and impressions.
Although the sample size was small, the researchers of this brain map think that this technique could help other scientists understand what's going on within the minds of patients with Alzheimer's. In their published report, they also suggest it could be used to communicate with people who can't speak or sign, effectively "reading" their minds.