In the News: AI Can Be Used to Create Extremely Realistic Forgeries

Authority Magazine profiled Justin Sherman, Ethical Tech’s Co-Founder & Vice President, for their “Future is Now” series with experts on bleeding-edge technological breakthroughs.

Fake news is already a problem. Recently, we saw this vividly illustrated by the White House press secretary’s use of a doctored video to justify banning a reporter. But deepfakes are going to make the issue much, much worse — because even critical consumers of information may not be able to tell if something is real. For many reasons, this could spell disaster.

World leaders have themselves demonstrated how Twitter and other social media platforms can dangerously propagate fake news with potentially devastating consequences. (In December 2016, a fake article claimed a former Israeli defense minister had threatened nuclear attack on Pakistan if it sent troops to Syria. Pakistan’s Defense Minister, not yet realizing this was fake, tweeted “Israel forgets Pakistan is a nuclear state too.”) Imagine, then, that it’s not just a sketchy news article or a poorly Photoshopped image that makes the rounds online — instead, it is a highly realistic (but fake) video of a world leader declaring war, which is then covered by a news outlet looking to “break the story” first, setting off a cascade of coverage. This is just a taste of some of the global security implications of this technology, not to mention what impact it could have on public discourse and the free press.

Read the full interview here.

Facebook, Google Invented Digital Crack, and Now We're Addicted to Lies

Bob Sullivan, Cybercrime & Consumer Tech Advisor, wrote for The Red Tape Chronicles about society’s addiction to confirmation bias and social media’s role in the expansion and deepening of this phenomenon.

I’ve written many times that consumers have been hacked. Corporations with billion-dollar research budgets have poked and prodded us for decades, exposing every weakness, turning our human nature against us and exploiting it systematically. After reading this excellent Washington Post story about the spread of conspiracy theories online, I worry that this hacking process is almost complete.

Read the full article here.

Far Too Often, Big Data is Bad Data

Bob Sullivan, Cybercrime & Consumer Tech Advisor, and Justin Sherman, Co-Founder & Vice President, wrote for Real Clear Policy about the problem with a “big data” mentality.

Big, sloppy datasets have reinforced prejudice, engendered black boxes, and contributed to flawed systems: recidivism predictors that discriminatorily recommend longer prison sentences for black convicts, chatbots that learn how to be racist and misogynistic in less than 24 hours, cameras that think Asians are always blinking, photo apps that label black people as gorillas, search engines that show men as CEOs at higher rates than women — the list goes on and on. (It has even been suggested that searching for “black names” will show ads for criminal background checks.)

Still, many fall for the seduction that data itself is infallible. It’s not. That insight is hardly new, but it’s a lesson humans need to keep learning over and over.

Read the full article here.

In the News: Ensuring Online Safety at Work

For a “best practices” guide for cybersecurity awareness month, Justin Sherman, Ethical Tech’s Co-Founder and Vice President, was quoted in The State of Security about balancing online safety between home and work.

Don’t email work documents to your personal email account. Not only could this be detected by your employer—which could get you in trouble—but it is a genuine security concern as well. The personal device you later use to open the file will probably have different security standards than your device at work, which means you could risk compromising sensitive information by accessing it on a weakly protected system. As much as possible, keep your work email for work stuff, and your personal email for personal stuff!

Read the full article here.

Future-Proof Learning

As part of Duke University’s NextEd Fest on “Future-Proof Learning,” Justin Sherman, Ethical Tech’s Co-Founder and Vice President, joined a panel discussion on “AI and Humanity: Ethical Impacts and Challenges.” Aria Chernik, Ethical Tech’s Education Strategist, moderated.

The discussion ranged from the existential threats that artificial general intelligence (AGI) could pose to humanity, to bias in machine learning systems, to integrating technology into college coursework.

Check out the full event details, and a video link, here.

We Can't Just Blindly Trust Algorithms

In response to a July 2018 Harvard Business Review article titled “Want Less-Biased Decisions? Use Algorithms,” Ethical Tech’s Co-Founder and Vice President Justin Sherman wrote for The Startup about the broader conversations on technology and inequality.

In the broader conversations about machine bias — and how technology is impacting equality around us — we must not forget the concentration of technological decision-making power and the perceived objectivity of the systems produced from those decision-makers.

Read the full article here.

Why We're Crowd-Sourcing Ethical Tech Education

Justin Sherman, Ethical Tech’s Co-Founder and Vice President, wrote for The Startup about his project with Aria Chernik to create the first open source “Ethical Tech 101” college class curriculum—specifically, why the first round of development is crowd-sourcing topics and resources.

We’re not crowd-sourcing topics and resources because we don’t have our own ideas; we absolutely do, and our friends, advisors, and collaborators could certainly provide enough information and expertise to develop a class on our own. Instead, we want to make sure we’re avoiding gaps and minimizing the impact of our own biases. After all, when people work together — or work in similar areas — it’s all too easy to forget a topic, exclude a perspective, or never see a new angle on an issue. And since so many of technology’s problems arguably stem from the homogeneity and exclusivity of the conversations and decisions about its innovation, consumption, and regulation, we need to avoid those same pitfalls ourselves as much as possible.

Read the full article here.

In the News: Implementing Security in the DevOps Lifecycle

Justin Sherman, Ethical Tech’s Co-Founder and Vice President, was quoted in The State of Security about best practices for integrating security into the DevOps lifecycle.

“Implementing security into the software design process is absolutely essential; however, it is by no means an absolute fix and must be implemented carefully. For instance, the majority of software developers do not have adequate training in security. They cannot be reasonably expected to suddenly write secure software when that hasn’t been the practice before, but it might be tempting to set this expectation anyway.”

— Justin Sherman, Co-Founder & Vice President at Ethical Tech

Read the full article here.