Who and What Are Worthy of Trust?

Author: C. Warren Axelrod, Ph.D., CISM, CISSP
Date Published: 20 February 2020

It has become extraordinarily difficult to distinguish between whom or what you can trust, and who or what is out to get you. Indeed, the 29 December 2019 issue of The New York Times included a section highlighting “A Decade of Distrust,” which featured the following quote from Michiko Kakutani:

When the 2010s began, we believed that technology would save us, that American divides could heal, that international institutions and alliances could deliver change. By 2016 that had all fallen apart. Everything has only gotten worse since then. But what comes next?

It might be fairer to say that not everyone believed in technology’s power to save us in the first place. It has been apparent to some of us for decades that political systems lag far behind explosive technological change and that government institutions are falling even further behind as time goes by. It has become commonplace to distort facts, twist beliefs, misrepresent opinions and otherwise try to fool recipients of information with misinformation, especially because the enabling tools are so readily available. There appears to be little effective oversight by lawmakers, regulators and major distributors of information. Sources can easily be spoofed and content can be falsified. The net result is that increasing numbers of individuals and organizations are tricked into believing false statements and images, and respond to them at their peril.

Such situations are unseemly at best, but they are particularly insidious when they involve individuals and institutions in which we have traditionally vested trust, such as those who pretend to defend us against such duplicity. It is when your supposed protectors turn out to be involved in the subversion that you are most vulnerable and most easily fooled. We are all susceptible to such trickery, and perpetrators know that well.

In addition to direct human involvement in generating nefarious come-ons, we have computer systems that can pervert the truth or take advantage of the more susceptible among us. Some of this may be attributed to faulty system design and coding, which is bad enough, but even more disturbing are the algorithms that purport to provide impartial evaluations. These algorithms are developed by engineers who have biases of their own that, intentionally or inadvertently, find their way into the algorithms. Furthermore, artificial intelligence (AI) and machine learning systems are regularly trained on historical data, which may themselves be slanted by bias in sample selection.
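
To make the sample-selection point concrete, here is a toy sketch of my own (not from the Journal article): a classifier is trained mostly on one group’s data and then evaluated on both groups. All group names, parameters and data are invented for illustration.

```python
# Toy illustration of sample-selection bias, not code from the article:
# a classifier trained mostly on one group's data performs well for that
# group and poorly for the under-sampled one. All values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """One feature whose relationship to the label differs by group."""
    y = rng.integers(0, 2, n)
    x = y + shift + rng.normal(0.0, 0.5, n)
    return x.reshape(-1, 1), y

# Group A dominates the training sample; group B is barely represented.
xa, ya = make_group(1000, shift=0.0)
xb, yb = make_group(30, shift=1.5)
model = LogisticRegression().fit(np.vstack([xa, xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh, balanced samples from each group.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    x_test, y_test = make_group(2000, shift)
    print(f"group {name} accuracy: {model.score(x_test, y_test):.2f}")
# Typically around 0.85 for group A and near 0.50 (chance) for group B:
# the model learned the majority group's pattern, not everyone's.
```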

Traditionally, we have been taught to observe security hygiene best practices, such as using complex passwords, enabling two-factor authentication and applying the “smell test,” but such approaches are only partly effective, as evidenced by the continuing success of cyberattacks and the proliferation of misinformation.
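
Even the partly effective measures are worth understanding mechanically. As one concrete illustration (mine, not the article’s), the following is a minimal sketch of the time-based one-time password (TOTP) algorithm of RFC 6238 that underlies many two-factor authentication apps; the secret below is an arbitrary example value, and real systems should rely on a vetted library.

```python
# Minimal TOTP (RFC 6238) sketch; illustrative only, not production code.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, step=30, at=None):
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)  # 30 s window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Client and server share the secret, so both derive the same code;
# the secret here is a made-up example value.
print(totp("JBSWY3DPEHPK3PXP"))
```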

What we need is a much better understanding of attackers’ and defenders’ motivations and incentives so that questionable activities can be addressed more directly. The greater the potential haul, the more effort attackers will expend and the less impact defenses will have. Effective deterrents therefore need to be developed and implemented to shift the balance in victims’ favor. This is no trivial task: it will require high levels of global political, economic and technical commitment and participation.
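
One way to make this cost-benefit claim concrete, a framing of my own rather than anything from the article, is a back-of-the-envelope expected-payoff model common in security economics; every number below is invented for illustration.

```python
# Back-of-the-envelope attacker economics; all figures are invented.
def attacker_expected_payoff(haul, p_success, attack_cost,
                             p_caught=0.0, penalty=0.0):
    """Expected net gain to the attacker; attacking 'pays' when > 0."""
    return p_success * haul - attack_cost - p_caught * penalty

# With a big haul and no real deterrent, even a costly attack pays:
print(attacker_expected_payoff(haul=1_000_000, p_success=0.05,
                               attack_cost=20_000))              # 30000.0
# A credible chance of being caught and punished flips the sign,
# which is the balance-shifting effect deterrents aim for:
print(attacker_expected_payoff(haul=1_000_000, p_success=0.05,
                               attack_cost=20_000,
                               p_caught=0.10, penalty=400_000))  # -10000.0
```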

The purpose of my recent Journal article on this topic is to increase readers’ awareness of why attacks happen and why defenders might not be up to the job. It is difficult but necessary to confront this reality if we are to mitigate successful attacks and discredit untruths.

Editor’s note: For further insights on this topic, read C. Warren Axelrod’s recent Journal article, “When Victims and Defenders Behave Like Cybercriminals,” ISACA Journal, volume 1, 2020.