After defeating the British in the Revolutionary War, General George Washington almost certainly could have seized control, and made himself dictator. Washington was revered, the Continental Congress was weak, and the argument that the colonies needed the stable leadership of a tested leader in the post-war period would have been an easy one to make. But he did not.
Washington’s insistence on civilian control of the military is now a bedrock of the United States’ political system. The very thought of the military intervening in our country’s political decisions, much less overthrowing a democratically elected Congress or president, sends a shiver down the spine of Americans; it would cut to the deepest level of what it means to be American. That idea—that elected civilians are our leaders, and the military answers to them—has held strong throughout our history.
There is no physical barrier, however, that prevents the military from intervening in the political process, or even from removing elected leaders from office. There is no wall, no defense. The military controls the weapons, and could intervene if it pleased.
What has prevented it here is the norm created when Washington relinquished power to Congress. It hasn’t happened because it violates an idea of what is acceptable in our country, and one that defines our country.
That is not the only norm we depend on within our system. We have depended, too, on the idea that even if we wholly disagree with officials elected to office and want to see them unseated as soon as possible, they are afforded the respect of holding office. They were elected to office through our political system, and while we may think they should not be in office, we at least acknowledge they were elected. Similarly, elected leaders have respected a norm that they will not use their new powers to persecute the people they have replaced.
Together with the Constitution, which slows down and impedes the ability of majorities to enact sweeping changes to our laws, these norms (more numerous than the three discussed above) limit the scope of change possible for a single election. One general election year will not mean that the previous administration will be thrown in prison and minority groups’ freedom of speech will be denied. It will not mean that the economy will be nationalized. It will not mean that all members of a racial group will be rounded up and placed in internment camps.
By limiting what a single election can decide, these norms turn down the temperature of our political debate. When a single election does not directly determine people’s rights, or whether the last administration will be imprisoned, there is much less incentive for drastic action, like a president refusing to transfer power to the candidate who was elected. Such norms help ensure stability.
I fear we are well down the path of weakening the very norms that have girded our democracy.
I am, of course, writing about Donald J. Trump, who is poised to be the Republican Party’s nominee for president.
Trump, though, did not start this erosion. We can trace it in its current form back at least to the 2000 election, and certainly to Obama’s presidency, with the right’s courting of birther conspiracy theorists who insisted President Obama was a foreigner and thus ineligible to hold office. We can lay blame on President George W. Bush for expanding the scope of executive power and legitimizing torture, and on President Obama for enshrining Bush’s expansion of power and expanding it further still. There is much blame to go around.
Trump is something altogether new, however. Whereas the past two presidents have undermined our norms at the edges while still paying respect to them (the role of the executive in our system, respect for the rights of all Americans, and the legitimacy of our political system itself), Trump has undermined our system’s norms whenever he has found it politically advantageous to do so.
His commitment to the protections enshrined in the U.S. Constitution is questionable at best and, if we assume the worst, downright frightening (the difficulty with Trump is that he’s not precise with words, so it’s sometimes hard to make sense of what he’s saying). He has expressed support for registering Muslims in a database, elaborating that they could “sign up at different places.” When a reporter asked how this was different from requiring Jews to register in Nazi Germany, Trump said “you tell me,” prompting The Atlantic’s David Graham to note that “it’s hard to remember a time when a supposedly mainstream candidate had no interest in differentiating ideas he’s endorsed from those of the Nazis.” Trump, for good measure, has also refused to disavow President Franklin D. Roosevelt’s internment of Japanese-Americans.
That is not even close to an exhaustive list, and Trump has added to it since by stating that Gonzalo Curiel, the federal judge presiding over a lawsuit he is involved in, should recuse himself from the case because Curiel, being of Mexican descent, cannot be impartial.
By doing so, Trump is using his position as a candidate for president to threaten a sitting judge, and to undermine the legitimacy of the judiciary. When a candidate for president uses his position to question a judge’s impartiality, the judiciary’s stature is weakened. What good are court rulings if the president declares that rulings running counter to his interests are biased and illegitimate? Through his statements, Trump lessens the standing of the judiciary, and raises the specter of his ignoring rulings altogether if he is elected. After all, why should the president respect “biased” and illegitimate rulings from an unelected body of judges?
Trump, too, is fond of threatening people he finds disagreeable. He has threatened the Ricketts family and David French’s family with consequences if they do not fall in line, and has used lawsuits as a bludgeon against people in the past. Those threats appear to be part of who Trump is and what he believes a good leader to be. He is, after all, the man who complimented the Chinese Communist Party’s strength in putting down the 1989 Tiananmen democracy protests with tanks and bullets, and the man who said he would compel the U.S. military to carry out unlawful orders, even if it refused.
Is that our norm for the executive, the branch of government that signs and enforces laws drafted by the democratically elected legislature? Someone who questions the impartiality of a federal judge, and uses the judge’s ethnicity as the excuse? Someone who doesn’t recoil at the idea of placing American citizens of one religion in a database so they can be tracked by the federal government? Someone who finds intentionally murdering the wives and children of terrorists morally acceptable, and believes it is “leadership” to force the military to carry out such atrocities? Someone who thinks it is not beneath a president to threaten private citizens for crossing him?
Those are not the norms we have established, or the norms that have provided remarkable stability in our political system since our founding. They are the signs of someone who fancies himself an authoritarian, of a person who believes anything, or anyone, that stands in his way is to be crushed. They are the marks of a demagogue willing to do anything in the pursuit of power.
Trump will likely not be elected president. But by allowing this man to be the Republican Party’s nominee for president, by allowing him to say and do the things he does, we are doing damage to our system of government. We are normalizing Trump’s behavior, normalizing his blatant use of racism and threats. He is raising the specter that things we did not think anyone would ever do could be done as the result of a single election.
Trump will not be the end of our system, even if elected. But he is accelerating the decline of what has helped make our form of government so strong and resilient. And for that, we—members of the party that has elevated this man to be our nominee—should be deeply ashamed.
There is no honor in sticking by a party that makes Trump our standard bearer, no good to come from party unity.
The United States is a country founded on ideas. Ethnicity and religion are not what have bonded us from our founding. It is the fundamental ideas expressed in our Declaration of Independence, and in our fight for independence, that run through our country’s history. Our founding set forth that individuals are ends unto themselves, and deserve to be respected as such; that government’s role is not to be the ultimate source of authority and power within society, but merely to protect the people’s pre-existing rights; and that through our will and determination, there is no limit to what we can accomplish.
We have not always honored and lived up to those ideas. Our founding itself was stained with the deepest of shames, the enslavement of human beings, even as our founders argued for the dawn of a new beginning. We subjugated the American Indians and cruelly abused them, treating them as less than human. We let the cancer of slavery metastasize until war was the only option remaining; and after slavery was broken, we allowed Jim Crow to replace it. We have not yet entirely grappled with what our country’s greatest shame means, nor have we left the effects of slavery to the pages of history. They remain with us today.
And yet America is a tremendous miracle. From British colonialism and abuse, we won our independence as a country, and forged one of the greatest works of humanity: the Constitution. The Constitution not only explicitly laid out the extent of the federal government’s powers and enumerated the rights of the people that must not be infringed, but created a political system that, through the separation of powers and the pitting of different power centers against each other, limited the government’s ability to fall under the dominance of a single group or a single passion of the moment, and to be used as a tool of repression, even when it represented the will of the majority. It is a marvel of all time.
Through our unique genesis, we forged an identity separate from ethnicity and religion. Our identity, what it is to be American, centers around our belief in respect for each other as individuals, and for our right to pursue our dreams. By doing so, our country has been able to adopt waves of immigrants, people utterly different from the people already here, and integrate them into our nation. Whatever our race, religion and culture, if we share the same fundamental ideas, we are one people. Our identity is our ideas.
We have not always lived up to that, either. But it is remarkable how many different peoples have immigrated to the United States since our founding, and in the ensuing decades became as “American” as anyone else. That is the strength of our country: We will take anyone, if they believe there is a better tomorrow through work. We can all have different skin colors, follow a different religion (or none at all), eat different food, have differing ideas for what the good life is, even speak different languages—and be unified as a single people. That is a miracle, and despite not always living up to it, it also aptly captures something fundamental to our country.
Our country, at its best, is not about “staying with our own kind,” or taking from others to increase the lot of “our people.” Our country is about being different, having different ideas—but being on the whole unified under an assumption that we can create a better tomorrow for everyone through work.
That is also why I have found Donald Trump’s campaign for president so disturbing. Trump has built his campaign—to “make America great again”—on the belief that America is lost, that we are an embarrassment, that we are weak, and that we can only return to “greatness” on the back of a great leader. Trump has made his appeal not by arguing for how we can empower all of us, as Americans, to pursue our dreams for a better tomorrow, but by appealing to the ethnic and religious differences between Americans. He has not just argued that open immigration could be harmful and we should be cognizant of it, but that Mexicans are rapists, drug dealers and killers. He has not just pushed for being mindful of the threat posed by Islamic terrorism, but has flirted with the idea of registering all Muslim Americans in a database so they can be tracked, and with barring Muslim Americans traveling abroad from returning to their own country. He is a man that has played on conspiracy theory and overt racism.
Trump has praised the “strength” of repressive dictators such as Vladimir Putin and repressive governments such as the People’s Republic of China, and has said—often on the same day he threatened an individual or company with consequences if he is elected—that he would open up libel laws so journalists could be sued for writing or saying what he finds to be misleading or false.
Trump claims he is conservative. What I see is a man that, in order to rise to the top, willfully pulls on the ethnic and religious differences in our country, and uses and amplifies prejudice and hatred, to garner the support of whites. He is intentionally dividing us as a nation, pitting white Christians against Hispanics and Muslims, regular people against the wealthy and “media elite,” “Americans” (by which he means white people) against foreigners, which includes not only foreign nations, but American citizens that have descended from immigrants of foreign nations. Trump is tearing at the very fabric of our nation.
He tears at it while also undermining the bedrock idea that the government does not lead our nation; individuals do. Ideology may not be fundamental to Trump, but a belief in the supremacy of great leaders, and in their necessity for a country to do great things, is. That belief underlies his fondness for Putin, a man unafraid of using the power of the state toward his own ends, and to crush his opposition. It underlies his praise for the PRC in 1989, when it crushed a budding protest movement in Tiananmen Square in Beijing. And it underlies his support for the use of torture and for killing the families of terrorists—great leaders do what is necessary to win.
Trump, then, is a man willing to divide us as a people, so that he can lead us to “greatness.” Trump’s idea of leadership is not to respect the limits of the federal government’s power, and the presidency’s power, but to do whatever he thinks is necessary (laws, morals, and individual rights be damned) to show our strength and impose his will, both on the world and at home. Trump does not see himself as the leader of a country defined by its rights, but as someone smarter and stronger than everyone else, and thus entitled to impose his will on whomever he pleases. There is a reason that “little,” “loser,” “low-energy,” and “weak” are some of his most-used insults for his opponents, and he speaks so often of being a “winner.”
I cannot support Trump because he is fundamentally destructive of what our country is. Trump is willfully tearing at what holds our country together and what defines us as a people. I cannot, and will not, support a man who appeals to our fears, to our baser instincts, who turns every issue into one of us versus them, and who traffics in conspiracy and racism. I cannot, and will not, support a man who fancies himself an authoritarian, a man who threatens people who say things he doesn’t like, and who threatens to undermine the First Amendment. I cannot, and I will not.
I will not support Donald Trump if he is the Republican Party’s nominee for president. If the GOP is remade in his image, I will leave the party. I owe the party no obligation, if the party has become destructive of what I cherish most. I cannot, and I will not.
I promise that I will fight Trump, the demagogue, now, and if he wins the nomination. I will not accept it, and neither should you.
If, like me, you are a Republican, I appeal to you to vote in your state’s primary, and to vote against Donald Trump. He has not won yet, and we can still fight. Let us defeat him. Let us win a victory for what we love about our country.
Earlier this month, Apple introduced iOS 9 with new search and Siri features. In iOS 9, users will be able to search for both specific activities and content contained in iOS applications. What this means is that users should be able to search for “sunburn” to find information on treating sunburns provided by iOS medical applications, and tapping the result will open it directly in the application. Additionally, these features will allow users to reference whatever they are looking at in a reminder they create through Siri. That is, when looking at an Amazon product page, users will be able to tell Siri “Remind me to buy this tonight,” and Siri will add a reminder with the link included.
Prior to iOS 8, an application’s functionality and content were indivisible from the application itself. If a user was looking at a photo in their photo library and wanted to edit it using a more powerful editing application they had installed, they had to leave Photos, open the editing application, and find and open the photo again there. If the user needed to access a text document they had stored in Pages, they had to launch Pages. In iOS 8, Apple eliminated the redundancy in the former example through extensions, which let developers atomize their application’s functionality so users can utilize it outside the scope of the application itself.1
The latter example still holds in iOS 8: content remains indivisible from the application itself. iOS 9, however, begins to break content and tasks out of the application by making them searchable through what used to be called Spotlight on iOS and is now simply Search.
The features around Search and Siri Reminders are absolutely useful. It is flexible and convenient to be able to move over to the resurrected Search page on the home screen and type in, say, “Denver” to find my flight or Airbnb reservation. What I find more interesting than the user-facing features here, though, are the tools provided to developers to make this possible, and the direction task and content search indicate iOS may be heading.
To allow iOS’s new Search feature to surface tasks and content contained within applications, developers must indicate to the system which things within their application should be surfaced, and what type of content each is (image, audio, event, etc.). Developers do much the same thing for tasks. Somewhat similarly, extensions indicate to the system what kind of content they can consume.
This is referred to as “deep linking,” because it allows users to follow a “link” to somewhere deep within an application for some kind of task or content, exactly like clicking a Google result that leads to a news article within a website, as opposed to going to the website’s home page and navigating its hierarchy to the article. “Deep linking,” while apt, is also somewhat misleading, because this enables much more than search. When developers update their applications to take advantage of Apple’s new APIs for identifying content and tasks to the system, they will be helping the system structure what data is on the user’s device, and what kind of data it is. The system will know what content is on a user’s device, what kind of content it is, and what kinds of content applications provide: what photos, contacts, events (say, hotel reservations), and music are there.
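To make the contract concrete, here is a toy model in Python. This is not Apple’s actual API—all of the class, app, and URL-scheme names below are invented for illustration—but it captures the shape of the idea: applications register typed items with a system-wide index, each carrying a deep link the system can use to reopen that item inside its own application.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class IndexedItem:
    app: str           # application that registered the item (hypothetical names)
    content_type: str  # declared type: "event", "image", "document", ...
    title: str
    deep_link: str     # opaque link the system uses to reopen the item in-app


class SearchIndex:
    """Toy model of a system-wide index that applications feed typed items into."""

    def __init__(self):
        self.items = []

    def register(self, item):
        # An app declares a piece of its content, plus its type and deep link.
        self.items.append(item)

    def search(self, query):
        # The system can now surface content from any app, by title.
        q = query.lower()
        return [item for item in self.items if q in item.title.lower()]


index = SearchIndex()
index.register(IndexedItem("TripApp", "event", "Flight to Denver", "tripapp://reservation/123"))
index.register(IndexedItem("StayApp", "event", "Denver loft reservation", "stayapp://stay/9"))
index.register(IndexedItem("Photos", "image", "Beach sunset", "photos://image/4"))

# Searching "Denver" surfaces items from two different applications,
# each with a link back into the app that owns it.
results = index.search("denver")
```

The point of the sketch is the division of labor: the app only declares what it has and how to get back to it, while the system owns search, ranking, and presentation.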
Using these tools, the system could begin to construct an understanding of what the user is doing. Applications are indicating to the system what tasks the user is performing (editing a text document, browsing a web page, reading a book), as well as what kind of content they are interacting with. From this, we can make inferences about the user’s intent. If the user is reading a movie review in the New York Times application, they may want to see show times for that movie at a local theater. If the user is a student writing an essay about the Ming dynasty in China, they may want access to books they have read on the topic, or other directly relevant sources (and you can imagine such a tool being even more granular than “the Ming dynasty”). Apple is clearly moving in this direction in iOS 9 through what it is calling “Proactive,” which notifies you when it is time to leave for an appointment, but there is the possibility of doing much more, and doing it across all applications on iOS.
Additionally, extensions could be the embryonic stage of application functions broken out from the application and user interface shell, one-purpose utilities that can take in some kind of content, transform it, and provide something else. A Yelp “extension” (herein I will call them “utilities” to distinguish between what an extension currently is and what I believe it could evolve into) could, for example, take in a location and food keywords, and provide back highly rated restaurants associated with the food keywords. A Fandango extension could similarly provide movie show times, or even allow the purchase of movie tickets. A Wikipedia extension could provide background information on any subject. And on and on.
In a remarkable piece titled Magic Ink, Bret Victor describes what he calls the “Information Ecosystem”: a platform where applications (what he calls “views”) indicate to the system some topic of interest from the user, and utilities (what he calls “translators”) take in some kind of content and transform it into something else. The platform then provides these as inputs to all applications and translators. It might offer a topic of interest inferred from the user; as I described above, this may be a text document where the user is writing about the Ming dynasty, or a movie review the user is reading in a web browser. Applications and translators can then consume these topics of interest, along with the information other utilities provide. The Fandango utility I describe above could consume the movie review’s keywords, for example, and provide back to the platform movie show times in the area. The Wikipedia utility could consume the text document and provide back information on the Ming dynasty.
What is important here is that the user intent inferred from what the user is doing and the specific content they are working with, together with the utilities described above, could be chained together and utilized by separate applications in ways that were not explicitly designed beforehand. Continuing the movie review case, while the user is reading a review for Inside Out in the New York Times application, they could invoke Fandango to find local show times and purchase tickets. This could occur either by opening the Fandango application, which would immediately display the relevant show times, or through Siri (“When is this playing?”). More interesting, one could imagine a new kind of topical research application that, upon noticing that the user is writing an essay related to the Ming dynasty, pulls up any number of relevant sources, from Wikipedia (via the Wikipedia utility) to online papers and websites. Perhaps the user has read several books about the Ming dynasty within iBooks, and has highlighted passages and added notes. If iBooks identifies that information to the system, such a research application could bring up not just the books, but the specific sections relevant to what the user is writing, along with the passages they highlighted or annotated. Through the platform Victor describes, the research application could do so without being explicitly designed to interface with iBooks. As a result, the work the user has done in one application can flow into another application in a new form and for a new purpose.
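Victor’s view/translator loop can be sketched in a few lines of Python. This is a purely illustrative model, not any shipping API: the shared pool holds typed “facts,” a view publishes the topic it has inferred from the user, and each hypothetical translator reads the pool and contributes new facts without knowing about the others.

```python
# A "fact" is a typed piece of information in the shared pool: (type, value).

def showtimes_translator(pool):
    """Hypothetical Fandango-style utility: movie topics -> local show times."""
    times = {"Inside Out": "7:30 PM at the Cinerama"}  # invented data
    return [("showtime", (v, times[v]))
            for k, v in pool if k == "movie" and v in times]


def encyclopedia_translator(pool):
    """Hypothetical Wikipedia-style utility: any topic -> background summary."""
    return [("summary", f"Background article on {v}")
            for k, v in pool if k in ("movie", "topic")]


def run_platform(initial, translators):
    """One pass: each translator reads the shared pool and may add new facts."""
    pool = list(initial)
    for translate in translators:
        pool.extend(translate(pool))
    return pool


# A view (say, a news app) publishes the topic inferred from what the user
# is reading; translators enrich the pool without being designed for each other.
pool = run_platform([("movie", "Inside Out")],
                    [showtimes_translator, encyclopedia_translator])
```

Any application can then consume whatever fact types it understands, which is what lets the chaining happen without pairwise integration between apps.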
To further illustrate what this may allow, I am going to stretch the above research application example. Imagine that a student is writing an essay on global warming in Pages on the iPad in the left split-view, and has the research application open on the right. As the user is writing, the text will be fed into a topic processor, and “global warming” will be identified as a topic of interest by iOS. Because earlier that week they had added a number of useful articles and papers to Instapaper from Safari, Instapaper will see “global warming” as a topic of interest, and serve up to the system all articles and papers related to the topic. Then, a science data utility the user had installed at the beginning of the semester would also take in “global warming” as a topic, and would offer data on the change in global temperature since the Industrial Revolution. The research application, open on the right side of the screen, will see the articles and papers brought forward by Instapaper and the temperature data provided by the science data utility, and make them immediately available. The application could group the papers and articles together as appropriate, and show some kind of preview of the temperature data, which could then be opened into a charting application (say, Numbers) to create a chart of the rise in temperatures to put in the essay. And the research application could adjust what it provides as the user writes, without them doing anything at all.
What we would have is the ability to do research in disparate applications, and have a third application organize that research for the user in a relevant manner. Incredibly, that application could also provide access to relevant historical data. All of this would be done without the research application needing to build in the ability to search the web and academic papers for certain topics (although it could, of course). Rather, the application is free to focus on organizing research in a meaningful and useful way in response to what the user is doing, and it would do so by designing for content types, not for specific data formats coming from specific sources.
Utilities, too, would not necessarily need to be installed with a traditional application, or “installed” at all. Because they are faceless functions, they could be listed and installed separately from applications themselves, and Apple could integrate certain utilities into the operating system to provide system-wide functionality without any work on the user’s part. For example, utilities could fill the role of Apple’s current integrations of Fandango for movie times and Yelp for restaurant data and reviews. Siri would obviously be a beneficiary of this, but all applications could be smarter and more powerful as a result.
Apple hasn’t built the Information Ecosystem in iOS 9. While iOS 9’s new search APIs allow developers to identify what type of content something is, we do not yet have more sophisticated types (like book notes and highlights), nor a system for declaring new types in a general way that all applications can see (like a “movie show times” type).2 Such a system will be integral to realizing what Victor describes, and is by no means a trivial problem. But the component parts are increasingly coming into existence. I don’t know if that is the direction Apple is heading, but it certainly *could be*, based on the last few years of iOS releases. What is clear, though, is that Apple is intent on inferring more about what the user is doing and intends, and on providing useful features with that knowledge. iOS 7 began passively remembering frequently visited locations and indicating how long it would take to get to, say, the user’s office in the morning. iOS 9 builds on that concept by notifying the user when they need to leave for an appointment to arrive on time, and by automatically starting a playlist the user likes when they get in the car. Small steps, but the direction of those steps is obvious.
Building the Information Ecosystem would go a long way toward expanding the power of computing: breaking applications, those discrete and indivisible units of data and function, into their component parts; freeing that data to flow into other parts of the system and to capture user intent; and allowing functionality to augment other areas in unexpected ways.
I believe that the Information Ecosystem ought to be the future of computing. I hope Apple is putting the blocks in place to build something like it.
Jupiter beckons in the distance, a small light, the greatest planet of all
I stare through the window, timeless, as the light slowly grows larger
I wonder what it will be like to see it with my own eyes
Swirls of orange and red and brown, a globe so large I can’t comprehend
The Jovian moons circling around the greatest planet of all, enraptured,
It is growing larger through the window
Through the window that separates me from the void,
Separates warmth and air and life from emptiness and death
This is what we have constructed
To ferry us across the great emptiness of space
It is larger still, I see color!
To see it with our own eyes
I see the moons!
To see if there is life beyond our little blue dot, so far away
To strike off into the unknown once again
To extend humanity beyond our home
I see it, I see it! I see!
But oh, this is the dream of a child
A great dream, but a dream
Remembered by an old man,
What could have been
The phone dominates your attention. For nearly every use, the phone has your undivided attention: browsing the web, Twitter, Instagram, Snapchat, watching video, reading, messaging all require focus on a screen that fills your vision, and generally some kind of interaction. And everything else is always a home button or a notification-tap away.
Is that a shock when the phone is the single gateway to nearly everything? The PC is now for doing work, but the phone is for messaging, taking photos, sharing them, the web, Twitter, Facebook, finding places to go, getting directions there, and even making calls.
That is why, when we receive a message and pull out our phones to respond, we often descend into a muscle-memory check of our other iMessages, email, and Twitter stream. We pull out the phone for one purpose, like responding to a message or checking our schedule, and end up spending several mindless minutes (or, if I am honest, more than several) checking in on whatever it is. We find ourselves doing this even when we shouldn’t: while out to dinner with friends, or at home when we should be spending time with family or doing other things.
I used “we” above because I think anyone with a smartphone, or anyone who knows people with them, can find truth in it to a greater or lesser extent.
My concern with wrist-worn “smartwatches,” starting with the Pebble, is that they appear to exist primarily to push the notifications we receive on our phones to our wrists. They seem to exist to make dealing with phone calls, messages, and updates easier: seeing them, ignoring them, replying to them. They are there to make dealing with our phones more convenient. And in large part, that is how smartwatches have been designed and used: “It’s there so I don’t have to pull my phone out of my pocket.”
But that idea of what smartwatches are for, making it more convenient to deal with the flood of notifications and information our phones provide us, is unimaginative. I think what the smartwatch can do is make the phone unnecessary for many purposes, create new purposes altogether, and allow us to benefit from a wrist-sized screen’s limitations.
On September 9th, Apple introduced their long-awaited watch, appropriately named the Apple Watch (hereafter “the Watch”). We won’t be able to fully understand what Apple has built until next year, but they did provide a fairly detailed look at the Watch and the software it runs.
It appears that, in contrast to Google’s approach with Android Wear (which is heavily focused on showing single bits of information or points of interaction on the screen, and relies on swiping between cards of data and interaction), Apple intends the Watch to run fairly sophisticated applications. The Watch retains the iPhone’s touch interface, but Apple has designed new means of interaction specific to a small screen. In addition to the tap, the Watch brings the “force tap,” which is used to bring up different options within applications (like, say, the shuffle and AirPlay buttons within the music application), and the “digital crown,” a repurposing of the traditional watch crown into a sort of scroll wheel. Using the digital crown, users can zoom in and out of maps and scroll through lists with precision, and without covering the small screen. And, most interestingly, Apple has replaced the familiar vibration alert of our phones with a light “tap” from the Watch to notify the user.
What this adds up to is genuinely capable applications. You can not only search for locations around you, but also zoom in and out of maps. You can scroll through your emails, messages, events or music. You can control your Apple TV.
This subsumes many of the reasons we pull out our phones during the day. We can check our schedule, check a message when it’s received and send a quick reply, find a place to get a drink after dinner (and get directions there without having to walk while staring at our phones), ignore a phone call by placing a hand over our wrist, or put something on the Apple TV.
But what force taps and the digital crown will not do is make the Watch’s small screen as large as a phone’s. You can’t type out a reply to a message or email. You can’t browse the web. You can’t dig through a few months of email to find a certain message. You can’t mindlessly swipe through Twitter (well, you could, but it’s going to be pretty difficult). That, though, is an advantage the Watch has over the phone. Because it is inherently limited, it has to be laser-focused on a single purpose, and while using it, you are limited to accomplishing that one thing. It’s a lot harder to lose yourself in a 1.5″ screen than it is in a 4″+ screen.
That’s going to be one of the Watch’s primary purposes for existing: allowing us to do many of the things we do on our phones right now, but in a way that’s limited and, thus, less distracting. If you’re out to dinner and receive a message (and haven’t turned on Do Not Disturb), you’re going to be a lot less likely to spend a couple minutes on a reply, and then Instagram, if you’re checking and responding to it on the Watch. It just doesn’t work that way.
In that way, I think Apple has embraced the wrist-worn watch’s inherent limitations. Rather than try to work around them, they are using them. They’ve built new means of interaction (force tap, digital crown, “taptic” feedback) that allow fairly sophisticated applications, but they didn’t use them to cram iOS in its entirety into the Watch.
What I think Apple is trying to do is build a new mode of personal computing on the wrist, one molded by the wrist’s inherent limitations and the opportunities they create.
In his introduction to the Watch, Jony Ive ends with a statement of purpose of sorts for it. He says,
I think we are now at a compelling beginning, actually designing technology to be worn. To be truly personal.
That sounds like a platitude, but I think it defines what Apple is trying to do. “Taptic feedback,” which Dave Hamilton describes as feeling like someone tapping you on the wrist, is a much less intrusive and jolting way of receiving a notification than a vibration against your leg or the terrible noise a phone makes on a table; more generally, so is focusing the Watch’s use on quick, single purposes.
What is interesting to me, though, is that they are using the Watch’s nature to do things in a more personal—human—way, and to do things the phone can’t. When providing directions, the Watch shows them on the screen just as you would expect on a phone, but it also does something neat: when it’s time to turn, it will let you know using taptic feedback, and it differentiates between left and right. As a result, there is no need to stare at your phone while walking somewhere to get directions.
They’ve also created a new kind of messaging. Traditionally, “messages” are words sent from one person to another using text or speech. Because messages are communication through words, something inherently mental rather than emotional, they are divorced from emotion. We can try to communicate emotion through text or speech (emoticons serve exactly that purpose), but that emotion is always translated into words and then thought about by the recipient, rather than felt. In person, we can communicate emotion with our facial expressions, body gestures, and touch. There’s a reason hugging your partner before they leave on a long trip is so much more powerful than a text message saying you’ll miss them.
In a small way, Apple is using the Watch to create a new way to communicate that can capture some of that emotion. Because the Watch can effectively “tap” your wrist, others can tap out a pattern on their Watch, and it will re-create those taps on yours, almost as if they were tapping you themselves. You could send a tap-tap to your partner’s wrist while they are away on a trip just to say you’re thinking about them. Isn’t that a much more meaningful way to say it than a text message? Doesn’t it carry more emotion and resonance?
That’s what they mean by making technology more personal. It means making it more human.
The Watch is not about making it more convenient to deal with notifications and information sent to us. It’s not even about, as I described above, keeping your phone in your pocket more often (although that will be a result). The Watch is creating a new kind of computing on our wrists, one that will serve different purposes than the phone, the tablet and the PC. The Watch is for quickly checking and responding to messages, checking your schedule, finding somewhere to go and getting directions there, helping you lead a more active (healthier) life, and enabling a more meaningful form of communication. And it will do all of that without sucking our complete attention into it, like the phone, tablet and PC do.
The Watch is for doing things with the world and people around us. Finding places to go, getting there, exercising, checking in at the airport, and sending more meaningful messages. Even notifying you of a new message (if you don’t have Do Not Disturb turned on) while out to dinner with family or friends serves this purpose, because if you have to see it, you can do so in a less disruptive way and get back to what you are doing—spending time with people important to you.
The Watch is a new kind of computing born of, and made better by, its limitations. And I can’t wait.
When I was growing up, I was fascinated by space. One of my earliest memories—and I know this is strange—is, when I was four or five years old, trying to grasp the concept of emptiness in space. I imagined the vast emptiness of space between galaxies, nothing but emptiness. I tried to imagine what that meant, but most of all, I tried to imagine what it would look like.
That question, what color empty space would be, rolled around my brain the most. I couldn’t shake it. I would be doing something (playing Nintendo, coloring, whatever) and that question would pop into my head again. What does “nothing” look like? First, I imagined it would look black, the black of being deep in a forest at night. But that didn’t seem right, either; black is still “something.” And then, I remember, I realized I was thinking about a much worse question. I wasn’t trying to imagine what the emptiness of space would look like. I was trying to imagine what nothing would look like.
I have that memory, I think, because thinking about that sort of broke my brain. I couldn’t comprehend what nothing is.
That question, of course, leads down toward the central question of what our universe is and how it was created. I think that’s why space (the planets, stars, galaxies) so fascinated me then; it’s this thing so alien to our world, something that dwarfs it on a scale incomprehensible to us, and yet it is us. We aren’t something held apart from it, but intimately a part of it and its history.
Trying to understand the physics of our universe, its structure and history is also an attempt to understand ourselves. I think, at some gut level, I understood that as a kid.
I poured myself into learning about our solar system and galaxy. My parents’ Windows PC had Encarta installed, and I was enthralled. I spent countless hours reading everything I could find within Encarta (which, at the time, felt like a truly magical fount of knowledge) about Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. And when I exhausted that source, I asked for books about space, and I obsessed over them. They were windows into these incredible places, and I couldn’t believe that we were a part of such a wondrous universe.
Through elementary school, my love for space continued to blossom. Back then, the people of NASA were my heroes. To my eyes, they were the people designing and launching missions across our solar system so we could understand even more about it. Many of the photos of Jupiter, Saturn, Uranus, and Neptune that I was so enraptured by were taken by spacecraft designed, built and launched by people at NASA. They were the people who had risked their lives to leave Earth and go to the Moon, to do something that, until just decades prior, most people couldn’t even imagine being possible. And they were the people exploring Mars with a little robotic rover called Sojourner at that very moment.
They were my heroes because they were the people pushing us to explore our solar system, to learn what was out there and what came before us. I felt like I was living during a momentous time in the history of humanity, and that I would live to see advances as incredible as 1969’s Moon landing. There wasn’t a doubt in my mind.
That year, in 1997, I was nine years old. It’s been seventeen years.
Since then, we have indeed made great advances. In that time, we’ve sent three separate rovers to Mars, and we discovered that Mars certainly had liquid water on its surface long ago in its history. We landed a probe on the surface of Saturn’s moon Titan, which sent back the first photos from its surface. We’ve discovered that our galaxy is teeming with solar systems.
All truly great things. But we are no closer today to landing humans on Mars than we were in 1997. In fact, we are no closer to putting humans back on the Moon today than we were in 1997.
Some people would argue that’s nothing to be sad about, because there isn’t anything to be gained by sending humans to Mars, or anywhere else. Sending humans outside Earth is incredibly expensive and offers us nothing that can’t be gained through robotic exploration.
Humanity has many urges, but our grandest and noblest is our constant curiosity. Throughout our history as a species, we have wondered what is over that hill, beyond that ridge, past the horizon, and, while sitting around our fires, what those lights in the sky are. In every age, someone has wondered, and because they wondered, they wandered beyond the border marking where our knowledge of the world ends, into the unknown. We never crossed mountains, deserts, plains, continents and oceans because we did a return-on-investment analysis and decided the economic benefits outweighed the costs. We did so because we had to in order to survive, and because we had to know what was there. We were curious, so we stepped out of what we knew into certain danger.
And yet that tendency of ours to risk everything to learn what lies beyond everything we know is also integral to all of the progress we have made as a species. While the first rockets capable of leaving Earth’s atmosphere were being developed, it was hardly obvious what they would allow us to do. Would anyone then have known that rocketry would let us place satellites into orbit, enabling worldwide communication, weather prediction and the ability to locate yourself to within a few feet anywhere on Earth? The economic benefits that result from progress are hardly ever obvious beforehand.
But it is more than that. It isn’t just that exploration drives concrete economic benefits. We think in narratives. Since the Enlightenment and the industrial revolution, we have built a narrative of progress. With each year that passes, we feel that things improve. Our computers get faster, smaller, more capable; we develop new drugs and treatments for diseases and conditions that, before, would have been crippling or a death sentence; with each year, our lives improve. For a century and a half or so, that feeling hasn’t been too far from reality. But most especially, we have continued to do something that cuts to the very center of what it means to be human: we have explored. We have explored the most dangerous parts of Earth, we have explored our oceans, we have put humans into space, and humans have set foot on a foreign body. There is a reason that, when we think of our greatest achievements as a species, landing on the Moon comes to mind with ease. At a very deep level, exploring the unknown is tied up with what it means to progress.
As exciting and useful as it is to send probes to other planets and moons, it fails to capture our imagination in the way that sending people does. The reason is that exploring the unknown ourselves is such an incredible risk. What Buzz Aldrin, Neil Armstrong and Michael Collins did in 1969 was unfathomably dangerous. They knew, as everyone knew, that there was a very good chance they would fail to get back to Earth. But they accepted that risk because, for them, learning about the unknown was worth it.
Abandoning human exploration of space, then, has consequences more far-reaching than its proponents intend. We would not just be giving up on putting humans into space; at some fundamental level, we would be resigning ourselves to staying here. We would have decided, as a species, that we have gone far enough, that our borders end at our planet’s atmosphere, and that the rest of the solar system and galaxy belong to nature. And with that decision, we would resign ourselves to no longer exploring in the general sense.
That’s why it is so integral that we continue exploring. Pushing on the edge of what’s possible is what fuels our desire and ability to explore in all other areas, too.
There are still incredible mysteries for us to unlock. We don’t know whether Mars had life early in its history. We don’t know whether, in Europa’s and Enceladus’s oceans, there are lifeforms swimming through them as I write this. We don’t know whether there is intelligent life living on planets in solar systems in the Milky Way and beyond. We don’t know how life began on Earth, let alone how life began at all. And most of all, we don’t know whether it is possible for us to move beyond our own solar system.
But what I do know is this: I want to know. I want to know.
Monday’s WWDC Keynote was easily the largest set of changes made to Apple’s platforms since iOS 2 was announced in 2008. The effects of what was announced will be felt and discussed for years to come.
There is a lot to think through and write about, which I will be doing in the coming weeks. However, something struck me during the keynote that felt fairly small at the time but, upon reflection, could end up being important to Apple’s future success.
Apple announced further updates to their cloud service, where you can save all of the photos and videos you take, all of your documents and all of your data. They announced that Touch ID, the feature which identifies you using your fingerprint, will now be accessible to third-party developers as well. And they announced a new app and framework for centralizing all of your health and fitness data, which (given your permission) can automatically be sent to your doctor.
That’s in addition to storing your contacts, calendar and reminders, and tracking your location over time (while keeping that data on your device) so your iPhone can provide timely updates on how long it will take to get home or to work in current traffic. Combined, Apple is asking you to store nearly all of your intimate information on their devices and servers, and even to provide the most intimate of it (your health data) to your doctor.
And yet I’ve heard little or no consternation over Apple consolidating our most private data, in an era in which our government maintains call logs, collects security and encryption exploits, breaks into private services to collect data, and has lied to the public about the extent of what it is doing.
That should be surprising, especially considering how much push-back companies like Google and Facebook have received for collecting and using our personal data. On the whole, people seem to trust Apple to respect their personal data.
The reason, I think, starts with the fact that Apple’s business is *not* their users’ data. Their business is selling devices and services to their users. As a result, Apple’s interest in their users’ data is not to generate revenue (which is inherently Google and Facebook’s interest), but rather to use it in such a way that they can create compelling and meaningful products for their customers. Their incentives are aligned with their users’ incentives because of their business model.
Second, Apple takes this relationship very seriously. iOS makes it very clear when applications are requesting access to our personal data. Apple has worked quite hard to make sure that the *user* decides what and how much they want to share.
I don’t think Google or Facebook could announce that they are going to collect their users’ health data and optionally send it to their doctors without a considerable amount of criticism and fear of abuse. The reason is obvious: their primary business is using user data to generate revenue, so why wouldn’t they do the same with health data?
As time goes on, the integration of our smartphones and health-tracking devices, and the increasingly sophisticated use of the data they generate together, will become the primary space where meaningful development occurs in technology. There’s huge potential in what Apple has announced with HealthKit. If it takes off, it will be a single place to store all of our health data. This will benefit doctors, who will be able to see that data in one place for the first time; and by aggregating it for each individual (and potentially for groups), we will be able to see trends and correlations between our decisions and our health that we simply could not see before.
That has the potential both for better decision-making and for doctors to get hold of us when something appears to be seriously wrong that we ourselves may not even be aware of. There is incredible potential here, and I think Apple is the only company that can pull it off. This puts Apple in a unique position as we continue into the future, and provides a special advantage that no other company has.
You have all the answers to my questions
Even ones I didn’t have
Why should I know anything at all?
You know everything I need
Everything I may need
You hold it all for me
So I waste no time
But still I wonder, why don’t I wonder?
Like I did as a kid
But no answer