Sue Gordon, the principal deputy director of national intelligence, wakes up every day at three in the morning, hops on a Peloton, and runs through the ways the United States could be attacked from around the world. Most afternoons she visits the Oval Office and meets with the heads of the 17 intelligence agencies to receive threat reports. The self-described "chief operating officer of the intelligence community" has plenty to worry about, but the 37-year veteran is generally optimistic about America's future. All she needs, she says, is for Silicon Valley to understand that technology and government need not be at odds.
On a recent trip to Silicon Valley, Gordon sat down with WIRED to discuss how much the government needs Silicon Valley to join the fight to protect the US. She was in town to speak at a conference at Stanford, but also to convince tech industry leaders that government and technology share many common goals, despite growing employee concerns.
"I had a meeting at Google, and I opened with: 'We are in the same business.' And they were like, 'What?' And I said, 'Using information for good,'" says Gordon.
That's a tough sell in Silicon Valley, especially in the years since Snowden. After his leaks, tech companies and tech workers did not want to be seen as complicit in a government that spied on its own people.
Gordon acknowledges, and even supports, a broader awareness of how technology can be abused, but she came to Silicon Valley to explain why government and tech should solve these problems hand in hand.
Gordon knows public-private partnerships. The CIA's venture capital arm, In-Q-Tel, which for almost 20 years has invested in everything from malware-detection software to biochemical sensors and microbatteries, was Gordon's idea. In-Q-Tel pioneered an approach of directly funding startups whose work might matter to national security, without placing restrictions on how the money is used or claiming ownership of the intellectual property. Among other successful investments, In-Q-Tel backed a company called Keyhole, which Google later acquired and turned into Google Earth.
"You don't become lawless just because you have technology."
Principal Deputy DNI Sue Gordon
Now, says Gordon, the time has come for a new partnership between the intelligence community and Silicon Valley. Artificial intelligence is a great opportunity for both the government and the private sector, but the danger of it being abused, biased, or exploited by foreign adversaries is real enough that government and tech companies should work together to secure it. Some technology leaders openly agree. Jeff Bezos told the WIRED25 conference last month, "If big tech companies are going to turn their back on the US Department of Defense, this country is going to be in trouble." But much of the rank and file finds the idea of cooperating with the government on matters of war unpleasant, or is outright hostile to it.
Google in particular has had a rocky relationship with the government recently. In June, pressure from its own employees prompted the company not to renew a Pentagon contract to support the development of AI that would identify drone targets. Gordon expressed her dismay over the decision, emphasizing that pattern-recognition work is crucial to intelligence gathering and that it is in the country's best interest to develop the best possible systems for it.
"I fear some people at Google think that if you work on Project Maven, which is about computer vision, an automated device will decide to launch a weapons system," she says. Gordon maintains, however, that a human being still makes that choice, and that everything related to war is subject to the rules of engagement, whether a human identifies a target or a machine alerts that human to a potential one. "You don't become lawless just because you have technology," says Gordon. "We are a nation of laws."
The risks of AI and its potential for abuse are a top concern for technologists, policymakers, and ethicists. Just this week, Microsoft president Brad Smith renewed his call for regulation of facial-recognition technology "before the year 2024 resembles 1984." Tech workers have objected to their companies, Microsoft included, cooperating with the government, saying they do not want their technology used by the government unless and until laws are in place specifically to prevent it from being abused. Case in point: a recent in-house meeting at Amazon where employees raised fears about the company's facial-recognition work for Immigration and Customs Enforcement.
Gordon acknowledges the risks, but believes that abandoning collaboration is the wrong way to address them. "There are so many bad things that can happen when you rely on algorithms to make decisions for you," she says, noting that the government strongly agrees that AI should be auditable and secure, work that can be done with the private sector. "If we use AI/ML to go through many images, say of suspected terrorists, and an adversary were able to alter that algorithm toward the wrong conclusion, you can see that would be bad." Gordon says the intelligence community has something to add, because it looks very closely at the threats the technology faces. "It will be useful to us in terms of national security, but it will also be beneficial to every aspect of American life," she says, whether that means self-driving cars or algorithms that guide medical care. She believes AI needs to be developed responsibly from the ground up, and argues that the private sector and the government have an obligation to be united in what she calls "shared creation."

Brain Trust
Beyond public-private cooperation, Gordon envisions a new paradigm for exchanging talented workers between government and the private sector. She rejects the idea that the best engineers don't want to work for the government, saying that people who want to work on problems they know matter are as attracted to government jobs as ever. Ideally, she thinks, technologists would start their careers in government. "We have the hardest problems, and we give [people] more responsibility younger," she says. Then they would move to the private sector, bringing what they learned and picking up what's new; and when they're ready to slow down and leave the rat race, as she calls it, they could return to government.
Gordon hopes that more of a revolving door would lead to less distrust and fewer misunderstandings. "I think there are many misconceptions about those of us who work in national security and intelligence," she says. "We swear an oath to uphold and defend the Constitution of the United States. That means we believe in privacy and civil liberties, and we swear to protect them."
Silicon Valley has a long history of working with the government and building on government-created technologies, a tradition that continues today. Collaborations and talent-sharing programs, such as the Defense Digital Service, which private-sector engineers can join, already enable some of this cross-pollination. As AI evolves and becomes more important to the military and intelligence services, and as Silicon Valley continues to reckon with the real-world uses and implications of its products, whether these partnerships can continue to grow remains an open question.
"I think it's delightful that they now have a morality about using the technology the department helped develop for them. That's cute," she says, "but we have always done this together."