Faking It

When the web was new, we were very worried about the reliability of online content. We were moving from an environment where the means of publication were controlled by gatekeepers, who decided what got published and ensured that the information the public consumed was accurate and reliable. At least, that was the idea.

With the web, that changed because everyone suddenly had the ability to publish content. Anyone could make a web page. So we had to figure out how to assess the credibility of a web site. I remember, when working on my Master’s degree in the late 90s, that information literacy was just starting to become a thing. We were worried that our students might believe everything they read online. So we tried to teach them to look critically at information resources. That work continues now, nearly a generation later.

But things have become more difficult. With the advent of Photoshop and other image editing software, it’s pretty easy to edit pictures to enhance or omit details. Sometimes, this is done for reasons of vanity, but it’s often done for political reasons as well. So now, in addition to assessing the reliability of web sites and news stories, we have to question the legitimacy of photographs, too. It’s okay. We’re getting better at it. We’re becoming more skeptical. Hopefully, we’re asking questions and citing sources and applying deductive reasoning and the scientific method to separate fact from fiction. I mean, it’s not like we’re just throwing up our hands and saying everything we don’t like is fake, right?

But here we go, making things harder again. Last year, Adobe showed a demo of its new VoCo product. Given a 20-minute sample of a speaker’s voice, you can quickly and easily edit the audio and make the speaker say anything you want.

This isn’t out yet, but it’s coming in a future version of Adobe Creative Cloud, a widely used graphic arts package that includes Photoshop, InDesign, and other “standard” tools used by professionals and amateurs alike to edit digital work.

So now, you can take an audio recording and edit it as easily as a word processing document to make the speaker say anything you want. That’s really cool, but also terrifying. But wait, there’s more. Check out this research project at Stanford:

See what they’re doing there? Using nothing more complicated than a webcam, they’re mapping facial features onto an existing video. If you pair these two technologies together, you can create a video that makes any public figure say anything you want.

Sure, it’s not perfect. This is still complicated software. It’s cumbersome to use, especially when you’re trying to put all the pieces together. And the results aren’t great. You can tell from this video that the technology is not quite at the point where it’s going to fool most people.

But our job just got harder. On one level, it’s not too bad that we have to teach our students to think critically about video and audio. We really should have seen that coming. And we’re already teaching students to think critically about information, regardless of its form. They just need to be aware that video and audio, like pictures and text, can be manipulated. Information has meta information. HOW do you know? What is the source for the position you’re taking? Why do you trust that source? We need to challenge our students and each other to make the information about the information just as important as the content itself.

But the real problem is the plausible deniability. We can no longer prove, beyond a shadow of a doubt, that someone said something or did something. You have video of me holding up a convenience store? Prove that it’s me and that it hasn’t been altered. You claim you have an audio recording of a public figure making misogynist / racist / anti-semitic / anti-American comments? Prove that it hasn’t been doctored. Because it’s easy to fabricate these things now, we can use the technology as a scapegoat to disavow responsibility for our words and actions.

Information literacy includes the skills of selecting and curating information, assessing reliability and credibility, and then using that information in responsible ways. I’m not convinced that it’s possible to do that anymore. And you can’t prove that I’m wrong.


Acknowledgment: Almost all of this came from the RadioLab Story “Breaking News.” Those guys do fantastic work. You should go listen.

Also, I have no idea where the Lincoln photo originally came from. It’s literally all over the place. No, I don’t have permission to use it.


Insecurities

Sometimes, the world isn’t a very nice place.

When the Internet was invented, it was a space for collaboration. The technical challenge of connecting disparate computer systems in remote locations was daunting. The goal was to allow researchers at the various locations to work together, sharing data, analyses, and perspectives.

https://www.flickr.com/photos/michaelsarver/62771138

The idea that some members of the community would try to exploit the system to gain access to information or resources that didn’t belong to them was inconceivable. The researchers and engineers designing the protocols and tools that eventually became the Internet were focused on getting the system to work. They weren’t worried about security.

That oversight is a common thread in innovation. We often underestimate how new technologies will be misused. Einstein famously regretted signing the letter that urged development of the atomic bomb. Kalashnikov was horrified that his rifle was used by so many to cause so much terror. Sometimes, we fail to consider the worst consequences of our best ideas. We’re so focused on making the impossible practical that we don’t spend much time considering whether impossible is such a bad thing.

The Internet has struggled with its underlying insecurity for decades. We have replacements for telnet and ftp (ssh and sftp) that encrypt communications to keep anyone from eavesdropping on them. We have https for encrypted web traffic. We use WPA to protect wireless traffic. We can even encrypt email if we have to, but almost no one does. Security is still an afterthought, bolted onto a product or protocol after it already works. Because they’re much simpler, the insecure versions are always more reliable, faster, more efficient, and more convenient. We often prioritize those qualities ahead of security, and continue to use technologies that we know will get us into trouble eventually.
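You can see that “bolted on” pattern in the tooling itself. In Python’s standard library, for example, TLS lives in a separate ssl module that wraps an already-working plain socket after the fact. A rough sketch (the hostname is just an illustration, and nothing here actually connects to the network):

```python
import socket
import ssl

# The insecure original: HTTP is plain bytes over a plain TCP socket.
# Anyone on the network path can read or alter this request.
def plain_http_request(host: str) -> bytes:
    return (f"GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n").encode("ascii")

# The bolted-on fix: the same socket API, wrapped in a TLS layer afterward.
def tls_wrapped_socket(host: str) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()  # modern defaults: verify certs and hostnames
    raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # the same plain socket...
    return ctx.wrap_socket(raw, server_hostname=host)         # ...encrypted after the fact

request = plain_http_request("example.com")
print(request.decode("ascii").splitlines()[0])  # the cleartext a snooper would see
```

The encrypted version works, but only because an extra layer was retrofitted on top of a protocol that was designed without it, which is exactly the pattern described above.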

The tech industry didn’t learn from the development of the Internet. Operating systems, too, were designed for a single user with total access to everything, as were phones and tablets. The possibility that a computer might be connected to other computers, and that other software and users might exploit that access, was often ignored. Even today, we run into a lot of software that won’t work without complete control over the entire computer and everything on it.

On the network side, the system requirements for just about every software package we use ask us to strip away layers of security. They often demand firewall and filtering exceptions that make our systems more vulnerable. When we point this out, we hit a brick wall: if we can’t prove that we’ve followed their requirements to the letter, they won’t help with any problems we have.


When you’re developing software, if you design it to work first and then try to add security later, it doesn’t work right. You end up in a cycle where you try to make the product more secure, but those efforts break some critical functionality. When you fix those bugs, you introduce more security problems. The result is a program that constantly needs updating, but never really reaches a point where it’s both secure and reliable.

This process used to be hidden from most people by beta testing. Back in the ’90s, it was cool to get betas of new software. You could try out new software in exchange for providing feedback to the developers, helping them fix bugs and get the product ready for the general public. I remember eagerly awaiting new beta versions of web browsers. It was an exciting time when you could get a glimpse of what was next.

As we’ve moved along, though, it seems like ALL software is beta software now. Each update comes with that wonderful anticipation of the new problems we’re sure to have. The industry constantly tells us we have to keep all of our software updated, but every time we do, something breaks. That’s okay. There’s a new version next week to fix that major problem. And the update next month will fix the security vulnerabilities introduced by this fix.

We’re living in a world where software doesn’t have to work reliably or securely. It just has to be “good enough” for now. Ship new versions quickly and regularly, and don’t worry too much about it. Every time I start up my phone or my computer or my tablet or my Chromebook, I have a nice new collection of crappy software to install.

So what’s the solution? How do we move away from this endless cycle? I think it comes down to the license agreement. You know, those terms you agree to without reading every time software tries to install or update? In Google’s case, the relevant parts are sections 13 and 14 (some of which I’ve left out). They put it in all caps so you know it’s important:

13.3 IN PARTICULAR, GOOGLE, ITS SUBSIDIARIES AND AFFILIATES, AND ITS LICENSORS DO NOT REPRESENT OR WARRANT TO YOU THAT:
(A) YOUR USE OF THE SERVICES WILL MEET YOUR REQUIREMENTS,
(B) YOUR USE OF THE SERVICES WILL BE UNINTERRUPTED, TIMELY, SECURE OR FREE FROM ERROR,
(D) THAT DEFECTS IN THE OPERATION OR FUNCTIONALITY OF ANY SOFTWARE PROVIDED TO YOU AS PART OF THE SERVICES WILL BE CORRECTED.

13.6 GOOGLE FURTHER EXPRESSLY DISCLAIMS ALL WARRANTIES AND CONDITIONS OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO THE IMPLIED WARRANTIES AND CONDITIONS OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.

Translation: I don’t know what you think this software is going to do, or if you’ve bought into all of our marketing hype, but no matter how low your expectations are, you should lower them more. 

14. LIMITATION OF LIABILITY

14.1 SUBJECT TO OVERALL PROVISION IN PARAGRAPH 13.1 ABOVE, YOU EXPRESSLY UNDERSTAND AND AGREE THAT GOOGLE, ITS SUBSIDIARIES AND AFFILIATES, AND ITS LICENSORS SHALL NOT BE LIABLE TO YOU FOR:

(A) ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL CONSEQUENTIAL OR EXEMPLARY DAMAGES WHICH MAY BE INCURRED BY YOU, HOWEVER CAUSED AND UNDER ANY THEORY OF LIABILITY. THIS SHALL INCLUDE, BUT NOT BE LIMITED TO, ANY LOSS OF PROFIT (WHETHER INCURRED DIRECTLY OR INDIRECTLY), ANY LOSS OF GOODWILL OR BUSINESS REPUTATION, ANY LOSS OF DATA SUFFERED, COST OF PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES, OR OTHER INTANGIBLE LOSS;

(B) ANY LOSS OR DAMAGE WHICH MAY BE INCURRED BY YOU, INCLUDING BUT NOT LIMITED TO LOSS OR DAMAGE AS A RESULT OF:

(I) ANY RELIANCE PLACED BY YOU ON THE COMPLETENESS, ACCURACY OR EXISTENCE OF ANY ADVERTISING, OR AS A RESULT OF ANY RELATIONSHIP OR TRANSACTION BETWEEN YOU AND ANY ADVERTISER OR SPONSOR WHOSE ADVERTISING APPEARS ON THE SERVICES;

(II) ANY CHANGES WHICH GOOGLE MAY MAKE TO THE SERVICES, OR FOR ANY PERMANENT OR TEMPORARY CESSATION IN THE PROVISION OF THE SERVICES (OR ANY FEATURES WITHIN THE SERVICES);

(III) THE DELETION OF, CORRUPTION OF, OR FAILURE TO STORE, ANY CONTENT AND OTHER COMMUNICATIONS DATA MAINTAINED OR TRANSMITTED BY OR THROUGH YOUR USE OF THE SERVICES;

14.2 THE LIMITATIONS ON GOOGLE’S LIABILITY TO YOU IN PARAGRAPH 14.1 ABOVE SHALL APPLY WHETHER OR NOT GOOGLE HAS BEEN ADVISED OF OR SHOULD HAVE BEEN AWARE OF THE POSSIBILITY OF ANY SUCH LOSSES ARISING.

Translation: whatever happens, it’s not our fault. Even if we do it on purpose.

The software companies have created conditions of use that eliminate any sense of accountability on their part. They won’t guarantee that their product will do anything, and they won’t be responsible for any damage created by it. Even if they willfully cause problems or data loss, lie to you about the product, and interfere with other technologies you’re using, they have no liability.

I keep waiting for the courts to throw these things out. End users are clicking through these agreements without reading them because they have no choice. They’re not making informed decisions to give away their rights. They’re not so excited to try out new software that they’re setting up test environments with no important data or work at stake. They’re just trying to get to the Internet, to check their email, to open a PDF file, and to get some work done. Where’s the stable, reliable software product that helps them do that?

Without any incentive to ship reliable, stable, secure code, we’re going to continue to be inundated with updates. Every time there’s a security breach or an internet outage or a loss of data, we’re going to blame the end user. “We told you not to trust our software.” “Why don’t you have a backup?” “What do you MEAN you’re still using that horrible old software from next month?” “Don’t you dare delay this update.”

So until something changes, we’ll keep installing updates, and then update the updates. And then reboot to find that there’s a bug fix for the update.

Photo credit: Michael Sarver on Flickr