I’d like to talk a little bit about The Future and what we think The Future looks like.
When you think of The Future—be it thirty years from now or three hundred—what is your mental picture? Electric cars replacing gas-driven cars? Driverless cars replacing drivers? Flying cars? Holograms? Putting a tiny pizza into a magical Hydrator that makes it ready to eat in five seconds? Interconnected displays for all your social media needs in every room of the house? Holographic displays, because those are cooler? Accessing the internet with your brain? Face-mounted displays like Google Glass or Oculus Rift being the norm rather than a rarity? Spending a great deal of time in a VR environment with photorealistic virtual avatars that may or may not actually represent the real you? Will the internet still be like the one we know today or will it look and operate a lot differently, whether from technological advances or the interference of government regulation or capitalism? Will we all have our own personal AI assistant?
When you think of The Future, do you only imagine advanced technology that a wealthy end consumer in a first-world country would own?
Or do you think of how Miami will look after climate change has put it under water? (Thankfully, Seattle will “sink” much more slowly even in the direst predictions.) Or the world finally solving hunger with better food waste management programs in countries with excess food and agricultural breakthroughs for countries with a depressed economy, poor topsoil, or little water? Or public opinion turning against alternative energy sources because lobbyists convinced everyone they’d lose their jobs? Or of the distinct possibility that a well-coordinated cyber attack could do just about as much damage to a nation’s infrastructure and economy as a major military engagement?
“Not everything is going to change in the space of a few years, even though it sometimes feels as though change is happening at an accelerated rate these days. We might get driverless cars on the streets, but we won't get rid of drivers any time soon.”
—Charlie Jane Anders, “10 Ways To Create A Near-Future World That Won’t Look Too Dated”
To me, The (Near) Future does not equal “Things I Saw on Sci-Fi Shows That I Wish I Could Own” or “All The Tech I Use Today Has Been Replaced with Magic” or “I’ll Be Enlisted in Star Fleet.” Of course, tech growth will happen, but many of the tech-growth articles I came across while writing Eidolon gave me the sense that we’re reaching a “wall” or plateau, that tech growth isn’t as fast or uniform as it may seem, and that a lot of tech growth will either be invisible to the average end consumer or won’t impact their day-to-day life.
The rate of tech growth is not and will not be consistent across all aspects of life. With my smartphone, I can play video games, stream media, read the news, take photos, check the weather, search the internet for encyclopedic information, buy movie tickets, access my bank account, and talk to anyone whose phone number, email address, or social media username I can get my (virtual) hands on. I didn’t use to be able to do all of that from one device, if at all. But I still own physical books, which have been around for centuries. I still buy physical books. People still read newspapers. I still drive a car, like my parents and grandparents have done. I still play board games as well as video games. I still wear glasses and have medical problems that existed before me and that science hasn’t fixed.
And even though smartphones do so much more than make calls, we still retain (for now) their origin in their name: (tele)phone.
“Bear in mind that not all technological change happens at the exact same rate. We tend to assume that because computers and related technologies have had an incredible revolution in the past 20 years, all technologies will behave the same way in the next 20. […] Nothing looks dated faster than a near future with unrealistically astonishing advances — just look at the legions of 1960s stories that had us colonizing space by the 1990s.” (emphasis mine)
—Charlie Jane Anders, “10 Ways To Create A Near-Future World That Won’t Look Too Dated”
The rate of tech growth is not and will not be consistent in every part of the world. Reasons will vary. Sometimes entrepreneurs and businesses don’t see enough profit in a technology to invest in it, cannot afford to invest in it, are forced to play catch-up, or fail to catch up and lose market share. Sometimes governments ban research in a certain area (e.g. human cloning). Sometimes lobbyists and other “special-interest” groups throw a tantrum and slow down advancement.
To stick with smartphones as an example, tap-to-pay functionality is not widely supported (and therefore not widely used) in the US, but when I lived in Tokyo a decade ago, it was easy for people to tap their phone to pay their train fare or pay for their bento lunch at convenience stores. Japanese mobile phone carrier DoCoMo invested in the underlying infrastructure needed to make its mobile wallet work—namely, working with handset makers on universal specifications and pushing large merchants to accept tap-to-pay payments by funding the checkout terminals. Five years ago, Google finally caught up to DoCoMo with its announcement of Google Wallet, now known after a merger as Android Pay. Only in September 2014 did Apple finally announce Apple Pay.
On the other end of the tech growth spectrum, a lot of end consumers in the world don’t have access to the technologies that I do, from the luxurious (smartphones) to the basic and necessary (clean water). That’s not even a developing-nation problem. A famous city in first-world USA, Flint, Michigan, has been experiencing a major water crisis after city management was too cheap to add relatively inexpensive chemicals to the more corrosive Flint River water and prevent it from leaching lead from pipes and fixtures.
When you think of The (Near) Future, you might picture something like Google Glass or the Apple Watch or an Oculus Rift headset replacing smartphones, but I question the likelihood of mass adoption for Google Glass and the Apple Watch, and whether they actually offer something a smartphone can’t compete with, something that would overcome the smartphone’s “stickiness.” I think of how long it actually took to get smartphones to where they are today.
I think of telephone operators and party lines, single landlines in homes, payphones, answering machines, cordless handsets, car phones, early mobile phones and pagers and PDAs with their easily lost styluses, the Japanese leaping ahead with early smartphones at the turn of the 21st century, then, a few years later, the first widely adopted early-form smartphone in the US, the BlackBerry, and the first iPhone in 2007, just nine years ago.
Companies began pursuing mobile phone technology in earnest after World War II. Only in the early 70s was the first handheld version demonstrated. Early mobile phones were enormous, expensive, and not entirely useful before the adoption of cell towers and cellular networks nearly a decade later. You’ll always find early adopters for new tech, but mass adoption was still far off, and mobile phones didn’t hit that sweet spot of useful and affordable until the early 90s.
Twenty-five years later, mobile phones have improved a great deal: touchscreens, stronger materials, more powerful batteries that charge faster, more functionality, and far higher data transfer rates (e.g. 3G, 4G) that better support data streaming.
An entire human generation spent honing one technology.
When first conceiving Eidolon, I asked myself, “Will humanity chuck smartphones for something else in the next thirty years? Or, more realistically, will smartphones coexist with another technology that is slowly replacing them the way mobile phones have led some consumers to disconnect their landlines? Or, just as realistically, will we continue to hone smartphone technology so it can do more things, better, faster, and cheaper?”
I thought of what current, early-form tech might challenge smartphones’ dominance in ten or twenty years. How will people text each other using the tiny screen of their Apple Watch? Will texting/Tweeting as a method of communicating disappear? Is that a practical prediction, or is the Apple Watch simply an expensive way to be notified of new email three seconds sooner?
Would something similar to Google Glass be all the rage, despite the troubling trend of tech companies unilaterally grabbing all rights forever to any videos, photos, status updates, and other media captured by users of their products? First, you should know Google Glass development has endured a number of hiccups—negative reactions based on privacy concerns, the project’s founder leaving Google, an update that only worsened Glass’s utility, and partner companies/suppliers backing out—all leading to comments about starting over “from scratch.” But assuming Google or someone else makes it past these initial hurdles, how would Future Google Glass solve the problem of having no easy, silent input interface like a keyboard, a touchscreen with virtual keyboard, or some other physical controller?
Despite how much we love holograms (and maybe mistakenly think they’re easy IRL because we’ve seen them on TV), projecting holograms is already a difficult problem, requiring “an intricate series of plates and lasers,” without adding in the need for a method to accept input. Would we use a laser-projected 2D keyboard? Pretty cool! And it’s something the protagonist of Eidolon uses at her desk at home. Projected-keyboard tech has been around for a few years, though, and hasn’t caught on much. And it can’t just make a keyboard in mid-air for you. A flat, stable surface is required.
Google Glass could eventually advance enough to create an Augmented Reality (AR) keyboard for you, which it can’t right now, but on top of that, Google would have to solve at least two more problems: 1) distinguishing which keys you’re hitting on the AR keyboard—and you could never turn the Google Glass camera away from your hands, and 2) giving Google Glass’s AR keyboard some way to responsively stay in place, such as a target outline or a special dot. Otherwise, the keyboard would move every time you breathe, adjust your position, or sneeze. How would that work in real life? Would you pop a special sticky-dot off your Google Glass headset and stick it on the nearest surface in order to type out a text message? What happens if you lose your dot? What if you’re at the club in the middle of the dance floor and your friend at the entrance wants to know if you want a drink on their way in? Where on the dance floor is a surface you can apply your dot to? Will the club lights interfere with the Google Glass camera’s ability to distinguish your hand movements as you type a reply? You could try to reply with voice commands, but the music might be too loud.
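To make the anchoring problem concrete, here’s a toy sketch in Python. Everything in it is illustrative (the key layout, the pixel numbers, the 2D simplification); it has nothing to do with any real Glass API. The keyboard’s keys are laid out relative to the detected anchor dot, and a fingertip only registers a keypress if it lands inside a key’s region. The consequence: any jitter in the anchor estimate moves every key.

```python
# Toy sketch of hit-testing a surface-anchored AR keyboard.
# Coordinates are 2D camera-image pixels; a real system would track
# full 3D poses. All names and numbers here are hypothetical.

KEY_SIZE = 40  # pixels per (square) key at this viewing distance

# Key layout expressed relative to the detected anchor dot, so the
# whole keyboard shifts whenever the anchor estimate shifts.
KEY_OFFSETS = {
    "Q": (0, 0), "W": (KEY_SIZE, 0), "E": (2 * KEY_SIZE, 0),
    "A": (0, KEY_SIZE), "S": (KEY_SIZE, KEY_SIZE),
}

def key_at(fingertip, anchor):
    """Return the key under `fingertip`, given the current anchor estimate."""
    fx, fy = fingertip
    ax, ay = anchor
    for key, (ox, oy) in KEY_OFFSETS.items():
        left, top = ax + ox, ay + oy
        if left <= fx < left + KEY_SIZE and top <= fy < top + KEY_SIZE:
            return key
    return None  # fingertip missed the keyboard entirely

# With a steady anchor, this fingertip lands on "W"...
steady = key_at(fingertip=(150, 120), anchor=(100, 100))   # -> "W"
# ...but if the wearer sways and the anchor estimate drifts 25 px,
# the same fingertip position no longer hits any key at all.
drifted = key_at(fingertip=(150, 120), anchor=(125, 125))  # -> None
```

The sketch also shows why the camera can never look away from your hands: without a fresh fingertip position and a fresh anchor estimate every frame, `key_at` has nothing to work with.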
These are very difficult problems that mere processing power can’t overcome on its own. Currently, other related technologies have similar limitations. Siri has trouble picking your voice out from others’, or even from moderate background noise. Shazam is not great at detecting which song is playing in earshot of its microphone. Microsoft’s Kinect, as cutting-edge as computer vision got when it first debuted, is pretty bad at detecting depth. A group of researchers tested how well the Kinect parsed American Sign Language and found it couldn’t consistently distinguish small finger movements, nor could it do much with facial expressions, which provide additional context in ASL.
Would we navigate with our eyes then? Type with our eyes? Is that practical? Will more and more people show up at their optometrist with severe cases of eye strain? Would we increasingly rely on voice commands to navigate and dictate? What would you do when you want to browse 2030’s version of Facebook in public? Would you be willing at all times to use voice commands in earshot of strangers? “Cortana, send a text to my mother: ‘I think dad is drinking again.’”
Come to think of it, will the average consumer ever stop feeling embarrassed giving their smartphone a voice command in earshot of strangers?
I thought of other technology that has persisted for decades, even centuries. Print books (despite the internet and e-books), eyeglasses (despite contacts and LASIK), locks and keys (despite card-readers and biometrics), and cars. We’ve come a long way from the Ford Model T, first manufactured and sold over a hundred years ago, but cars are still a major transportation method. So are trains, buses, boats, and airplanes.
Necessity is the mother of invention, as they say, and if my phone’s functionality isn’t greatly improved by putting it on my face, whether as a HUD (Heads-Up Display) or virtual reality, then it’s not going to replace smartphones in the next twenty or thirty years. Likewise, as long as webcams continue to fulfill their function for a lower cost than holographic teleconferencing, we’ll still have flat displays as well as cameras on our devices.
Rather, I think smartphones will continue to take on functionalities, such as slowly replacing credit cards the way credit cards have eroded the use of checks. Smartphones and tablets will continue to eat into e-Reader market share. The built-in “intelligent personal assistant,” like Siri, will become smarter and able to respond to more complex questions and tasks. “Siri, will I make it to my flight?” “So far, yes, but the security line closest to your gate currently has an average wait time of twenty minutes and is growing longer.”
Smartphone hardware will continue to improve, such as waterproofing being the norm rather than a rarity and even flexible or transparent displays. Smartphones might even roll up or have an alternate, wearable form. Battery life, size, and charge rate will improve. Data storage will improve. Camera resolution will improve. Wireless coverage will improve, not just out in the country but in subway tunnels and deep inside large buildings. Data transfer rates will improve (“Now available in ultra-speed 12G!”). Smartphone operating systems will offer dual (or more) profiles, similar to PC user profiles, allowing you to keep your personal apps, contacts, and log-ins separate from your business apps, contacts, and log-ins. Smartphone software/hardware security will continue to improve, but so will the techniques hackers use to get around that security.
No, I don’t think we’ll be accepting chips in our brains and using them in conjunction with our Google Glass to type and otherwise access the internet within the next generation. Possibly not even in my lifetime. Possibly never. Currently, any internet-connected technology you can think of has security risks. I doubt that will ever change. Hackers already use zero-day exploits, malware, social engineering, and a plethora of other techniques to commit theft and mayhem. Head on over to ArsTechnica’s “Risk Assessment / Security & Hacktivism” section to see the latest instances of security bugs, DDoS attacks, and cyber terrorism. Why then would end consumers race to stick a chip in their brains that could be used against them in a myriad of ways? What if someone used your Google Brain Chip to induce a seizure? Y’know, “for the lulz?” Or, if we want to get really speculative about it, to steal a first-person recording of you having sex with your spouse? Or implant a recording of someone you care about being murdered? By “you?”
That said, brain chips are not a bad idea when it comes to research and specialized medical treatment.
Then again, consumers of today generally don’t understand computer security or account privacy. Perhaps in another generation, we’ll care more. Perhaps not. Perhaps we’ll all sign up for seizure-inducing brain chips despite the nigh-inevitable harm we’ll open ourselves up to.
No, I don’t think we’ll be uploading our brains to the internet or talking to artificial intelligence within the next generation. Possibly not even in my lifetime. Possibly never. As I’ve said in a previous post, emulating the human brain and creating artificial intelligence are extremely difficult problems. Sure, some scientists are optimistic, but you’ll also find plenty who aren’t. That said, I won’t kick myself if I’m wrong. It’d be super exciting! Or maybe the machines will kill us all, in which case it’d still be a different kind of “exciting."
Other media have tried to predict near-future technology. Blade Runner (1982) imagined video payphones and dash-mounted phones in 2019, but not mobile phones. Back to the Future Part II (1989) thought we’d all have multiple wall-mounted fax machines in our homes in 2016. We don’t. Fax machines are still around, of course—I had to fax home-buying paperwork to the bank last year, which meant I had to visit a Kinko’s—but I also had occasion to scan and email some documents as well as virtually sign digital documents. Imagine that: older technologies lingering alongside newer ones.
Yes, The (Near) Future will surely have some tech that 2016 me will ooh and ahh over, but Moore’s Law does not directly lead to holographic AI assistants or humanity uploading themselves into the internet and leaving behind mortality. The slow, grinding wheel of politics and engineering conundrums and corporate competition and limited resources—the limitations of physics—means “sheer processing power is not a pixie dust that magically solves all your problems.” (source)
You think flying cars. I think of how difficult an engineering problem that has been for decades. I think of how dangerous it already is to drive a land-bound car, how much certification is required to fly planes or drones, and how flying cars would only lead to more people crashing into houses or businesses. I think of how they’d need to be limited to a certain altitude, a limit people would inevitably hack past, because otherwise flying cars would pose a significant risk to flights arriving at and departing from the nearest airport, not to mention people illegally parking on top of houses, businesses, and even skyscrapers. I think of the potential for a high number of casualties from an airborne multi-car collision. I think of the exorbitant initial price tags for flying cars and how most people won’t even be able to afford them.
You think electric cars. And I would love that! But then I think of, y’know, oil lobbyists. I think of the amount of infrastructure that would need to change in order to cater to electric vehicles, which currently only get you somewhere between 68 and 92 miles before running out of juice. (The 2017 Bolt promises to double that.) And then you’d need to sit and recharge for several hours. What are your options when you want to drive out to the Cascades for a hike? Or drive to a city with no electric-dedicated parking spaces? Do you leave the electric car at home and borrow your friend’s gas-driven car? Until electric vehicles improve their battery charge rate, their max driving range, and the related infrastructure needed to “fuel up," even their very competitive price tags won’t offer enough incentive to switch. It’s kind of a toss-up in my mind as to whether they’ll get a serious foothold in the market within the next ten to twenty years.
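The range math above is easy to check. Here’s a back-of-the-envelope sketch in Python using the 68-to-92-mile figures from the paragraph; the 140-mile round trip is my own hypothetical hike distance, and the model deliberately ignores charge time (which, as of this writing, is measured in hours, not minutes).

```python
# Back-of-the-envelope EV trip feasibility, using the ~68-92 mile
# ranges mentioned above. The 140-mile round trip is a hypothetical
# drive-to-the-Cascades hike; real distances and ranges vary.

def trip_is_feasible(round_trip_miles, range_miles, chargers_en_route=0):
    """Crude check: each en-route charger grants another full range's
    worth of miles. Charge *time* is ignored entirely, which is very
    generous to the EV."""
    usable_miles = range_miles * (1 + chargers_en_route)
    return usable_miles >= round_trip_miles

# A 140-mile round trip on a 92-mile battery, no chargers: no go.
today = trip_is_feasible(140, 92)
# Double the range (the promised 2017 Bolt ballpark): feasible,
# with miles to spare.
bolt = trip_is_feasible(140, 92 * 2)
```

Even this generous model shows the problem: below roughly double today’s range, the trip only works if charging infrastructure happens to exist along the route, and you still pay for it in hours spent waiting.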
You think “smart home” or the “Internet of Things.” I agree some early smart devices will see mass adoption—Lily in Eidolon uses an app on her phone to start a pot of coffee in the morning, and her fridge orders home-delivered groceries for her—but so far, smart devices are more buggy, security-hole-ridden gimmick than anything else. I think of those early adopters with hilarious stories of smart bulbs DoS-ing their whole house. I think of smart devices being commandeered to form a spam-sending botnet, of hackers using security bugs in smart devices to take over them remotely. I think of people unaware that their internet-connected webcam is beaming images of their sleeping baby to anyone who knows how to find it. I think of the data from internet-connected, health-tracking devices like FitBit being used by people other than the user and their doctor: for example, the government, advertisers, employers, insurers, and potential dates. I think of how 70% of the 25 billion internet-connected devices currently online “contain serious vulnerabilities.”
In that vein, you think “smart lock.” Let’s say your front door knows to unlock once you’re a certain distance from your door (based on your phone’s GPS location), or once you’ve held your phone up to the lock (using an RFID chip in your phone). What if your phone runs out of power? Or breaks? Or is stolen (because a mugger is way more interested in your expensive smartphone than a set of house keys)? What if a security update for your smart lock ends up breaking it? What if whatever powers it, whether an electrical connection or a set of batteries, dies? How do you get into your house?
Does your smart lock have a back-up plan, like biometrics? Unfortunately for you and me, biometrics aren’t actually very secure because they’re easily obtained or faked. A team of German hackers got around Apple’s Touch ID in less than 48 hours. Will your back-up plan be a regular key? Makes you wonder why you bothered with a smart lock in the first place if it only saves you about five seconds and really isn't any better at securing your home when a burglar could simply bust open a window. And what about people in apartment buildings? How likely is it that an average, affordable apartment building at the early onset of smart-lock adoption will care to invest in them? How likely is it that they’ll tell you to use a physical key and shut up about it? How likely is it that a criminal will use a security bug in the brand of smart locks your apartment building installed and rob multiple apartments in one night?
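The fallback chain those questions imply can be sketched as a simple priority list. This Python sketch is hypothetical (no real lock exposes an API like this); the point is where the chain bottoms out.

```python
# Sketch of a smart lock's unlock fallback chain. Method names are
# hypothetical. The punchline: the last resort is an ordinary key,
# which is exactly what the smart lock was supposed to replace.

def try_unlock(phone_has_power, phone_in_range, biometric_ok, has_physical_key):
    """Return which method finally opens the door, or None if locked out."""
    if phone_has_power and phone_in_range:
        return "phone"           # GPS proximity or RFID tap
    if biometric_ok:
        return "biometrics"      # spoofable, as the Touch ID bypass showed
    if has_physical_key:
        return "physical key"    # the backup that makes the lock redundant
    return None                  # call a locksmith

# Dead phone, no biometric match, but keys in pocket:
method = try_unlock(False, False, False, True)  # -> "physical key"
```

Every branch you add for reliability is also a new attack surface, and the most reliable branch of all is the hundred-year-old one.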
You think holograms. I think of how difficult an engineering problem that has been for decades. Getting a beam of light to “stop” and “create a pixel” means introducing something into its path to intercept it. Are holograms going to be limited to the dimensions of a glass case? Are holograms going to have to rely on optical illusion rather than true 3D? Moreover, recording images for a hologram to project is another difficulty. That requires multiple cameras shooting the subject from several angles. Otherwise, the projection would have to “fuzz out” whatever it can’t see of its subject (in which case, what’s the point?) and/or use complicated math to “predict” the rest of the subject’s form. That kind of technology seems at least ten years off, even at a brisk pace of development.
You think artificial intelligence and personal AI assistants like Jarvis. I think of how difficult an engineering problem (both software and hardware) that is and has been for years. I think of how scientists don’t even agree on what artificial intelligence is or what it will look like, assuming we’ll even know it when we see it. (Because we might not.)
You think virtual reality. I think how, in a way, we’re already experiencing early-form VR when we look at any screen and trust it to intermediate more and more of our lives, to show us a “reality,” whether as the news, directions, or accurate search results, with not nearly enough thought as to whether those screens are deceiving us.
I think of how VR headsets have taken decades to develop and improve. I think of Nintendo’s first attempt at a VR headset, the Virtual Boy, back in 1995. It became Nintendo’s second-worst-selling console. I think of how VR has already been used in medical and military training. I think of both the real and potential entertainment value of the VR headset Oculus Rift, which has only been available to developers for the past three years (a consumer ship-date is set for July 2016), but which retails for a whopping $599 and thus would need to become cheaper before it enjoyed wide adoption. About half of all US households have a game console, though, so perhaps within the next ten years, we’ll see more VR games and perhaps even VR social media. Will people run home and use their VR headset every day? Maybe. Maybe not.
A lot of tech growth will be practically invisible to me or won’t impact me directly. Maybe the city’s water treatment plants get upgraded with improved water filtration and its gas lines get outfitted to shut off automatically when earthquake tremors are detected, but I’ll still get water from a tap and heat my house with natural gas. Maybe I’ll use a “smart” thermostat, but maybe I’ll stick to analog due to price, bugs, compatibility issues, security risks, or potential fire hazards.
When I set out to write Eidolon, I wasn’t interested in writing “jetpack commuting” and “domed cities.” I was interested in a more realistic future setting. I took two current technologies I find interesting—robotics and computer intelligence—and explored how their development might play out in a setting where a company was not only determined to spend a whole lot of money but where they found some success. Some things about this near-future setting affect the protagonist’s daily life, such as her use of driverless taxis, her work with VI-motivated androids, and a few internet-connected devices in her apartment, but not all tech growth would necessarily come up in the narrative, such as improved medical technologies, the shifting of demographics, and the state of drone legality. Also, older tech stuck around. You know, like it tends to do. People can still drive cars. They still use smartphones, webcams, and physical keys. That’s just an authorial decision I made based on Lily’s preferences and financial circumstances and my best efforts as an amateur futurist.
What set me on the path of a more constrained future setting was the number of times I’ve seen a movie or TV show that made the mistake of not pushing major technological and scientific advancements far enough into the future. It’s not that we don’t have the will or the intelligence. It’s that so much slows us down, from government interference (sometimes for good reason) to public opinion to limited funds and material resources. I don’t hold out much hope that we’ll stop ravaging the environment in exchange for (massive) short-term profit, but a more mindful approach to environmental protection as well as the elimination of exploitative labor practices would also slow our technological advance.
That said, I enjoy watching a technology advance faster than expected. To see creative, cheap, practical uses emerge that I never considered. To learn how a developing technology can be exploited in ways many people never imagined.
I also understand enjoying stories featuring technologies that very well may never come to pass. I enjoy those stories, too! But I can also imagine a future where we never have humanoid robots in our homes. Never achieve interstellar flight. Never cure cancer. Never have colonies on Mars. (Yes, NASA landed the Mars Rover and Elon Musk exists, but no human has stood on our own moon in over forty years.)
My vision of The Future is not necessarily your vision of The Future. Or even necessarily my only vision of The Future. The setting of Eidolon was simply one Near Future to which I decided to constrain myself. I’m also interested in 3D printers, transhumanism, and genetic research; I’m just not sure anyone but the rich will have access to most advancements in these areas, at least not in my lifetime. I’m also interested in the future of tech crime: data stolen or spoofed, infrastructure vulnerabilities exploited, finances both commercial and individual utterly destroyed. Maybe data security isn’t as “sexy” as flying cars and VR addiction, but tech crime is already in our present and will feature even more prominently in our future.
I do know that I had fun creating the near-future world of Eidolon—at least, whenever I wasn’t ten tabs deep into researching something complicated. I may even go back there someday with some new ideas sprinkled in, like package-delivering drones, 3D-printed cocks, and gimmicky-as-fuck smart mirrors for those who really love throwing their money into a hole. All I can hope is that if you read Eidolon, you’ll also end up interested in re-visiting that future Seattle.