Friday 11 January 2019

Artificial Intelligence, Human Sources, and Processing-Power Asymmetry: The Metrics of the AI Ethical Challenge


Part of my existing research plan includes work on the ethics of information transmission. What does this trivial-sounding description mean? According to most contemporary competing accounts of the nature of information*, for information to be deployed or effective (whether in accordance with teleological content or on the basis of cause-effect complexes), it must be encoded and transmitted some distance; even reading directly from a source requires this. (*There are dozens of conceptions of the nature of information, but most require transmission capability, including the statistical, the algorithmic (Kolmogorovian and Solomonoffian), the semiotic-significatory (Peircean), and the teleosemantic.) Wherever semantic information is transmitted (I have argued [1] [2] that all information is inherently and intrinsically semantic on a causal and indicative basis), there is an ethical dimension whenever a human agent is, or is part of, the source of that information. Any information generated by, encoded from, or originating at a human-agent source has ethical import because it intrinsically indicates something about that agent or person. The statistical conception, at least, is easy to make concrete, as in the sketch below.
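Here is a minimal sketch of the statistical (Shannon) conception of information content. The example signal and the character-level encoding are hypothetical choices of mine, intended only to show that even a trivial message emitted by a human source has quantifiable information content whose statistics indicate something about that source:

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Shannon entropy, in bits per symbol, of an encoded message."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Even a short transmitted signal from a human agent carries measurable
# information; its symbol statistics already indicate something about
# the source that produced it.
signal = "yes no no yes yes yes"
print(f"{shannon_entropy(signal):.3f} bits per symbol")
```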
Artificial intelligence algorithms, especially those involving recursively self-modifying code, will be able to manipulate this information and make powerful predictions from it. Our ability to comprehend, or even detect, how this is achieved may in some cases be very limited or non-existent, even with computer- and AI-aided analysis. This will probably be especially true of AI algorithms running on quantum computing platforms. Such AI (especially strong AI, if and when it eventuates) will be able to use and manipulate both the most basic and the most complex signals and messages emitted or transmitted from a human-agent source, and to do so in ways that human agents cannot track or analyse without computer-aided analysis involving other AI or deep-learning algorithms. The sketch below makes the opacity point concrete at a very small scale.
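A hedged, minimal illustration of that opacity, using plain NumPy and synthetic stand-in data (nothing here is drawn from any real behavioural dataset): a small neural network learns to predict a hidden trait from low-level "signal" features emitted by hypothetical human sources, yet its fitted weights offer those sources no tractable account of how the prediction is achieved:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 hypothetical human sources, each emitting 10 low-level signal
# features (e.g. timing or frequency statistics); the hidden trait
# depends on a nonlinear combination the sources cannot inspect.
X = rng.normal(size=(200, 10))
y = (np.sin(X[:, 0] * X[:, 3]) + X[:, 7] ** 2 > 0.5).astype(float)

# One hidden layer, trained by plain full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(10, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=16);       b2 = 0.0

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # predicted probability
    g = (p - y) / len(y)                    # cross-entropy gradient
    W2 -= 0.5 * h.T @ g; b2 -= 0.5 * g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)     # backpropagate to layer 1
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = 1 / (1 + np.exp(-(h @ W2 + b2)))
print("training accuracy:", ((p > 0.5) == y).mean())
# The fitted weights predict the trait, but they do not constitute an
# explanation that a human source could track or audit.
print("first row of W1:", np.round(W1[0], 2))
```

Even at this toy scale, the learned parameters carry the predictive rule in a form with no human-readable decomposition; the asymmetry only widens as models grow.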
There will be a high-risk epistemic and processing/cognitive-power asymmetry between AI algorithms of this kind and the human sources whose information they process. On any of a number of metrics for the disparity between the processing power of AI algorithms and that of their human target sources, this asymmetry is likely to increase at a non-linear rate, probably far outstripping the well-known exponential growth in computing power described by Moore's law. The back-of-envelope sketch below makes this comparison concrete.
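All the parameters in the following sketch are assumptions of mine, not measurements: Moore's law is modelled as capacity doubling every two years, human processing power is held roughly constant, and the machine side follows a hypothetical doubly exponential curve (the number of doublings itself doubles every few years). The point is only to show how quickly such an asymmetry diverges:

```python
def moore(t, doubling_years=2.0):
    """Moore's law modelled as capacity doubling every two years."""
    return 2.0 ** (t / doubling_years)

def accelerating(t, d0=4.0):
    """Hypothetical super-exponential growth: the number of doublings
    itself doubles every d0 years (a doubly exponential curve)."""
    return 2.0 ** (2.0 ** (t / d0) - 1.0)

HUMAN = 1.0  # human processing power treated as a constant baseline

for year in (0, 4, 8, 12, 16, 20):
    m, a = moore(year), accelerating(year)
    print(f"year {year:2d}:  Moore x{m:.3g}   accelerating x{a:.3g}   "
          f"asymmetry vs. human x{a / HUMAN:.3g}")
```

On these assumed constants, the accelerating curve overtakes the Moore's-law curve between years 10 and 12 and then gains orders of magnitude every few years; the exact figures are artefacts of the chosen parameters, but the qualitative divergence is the point.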

[1] https://ses.library.usyd.edu.au/handle/2123/19601
[2] https://link.springer.com/article/10.1007/s11229-014-0457-7