
Elena Esposito argues that artificial intelligence is misnamed and that a more accurate descriptor would be ‘artificial communication’.[1] Here, communication is closely connected to the idea of being informed: a communication is pertinent to the extent that it informs the addressee of something novel, something not previously known to them. However, we need to be careful because Esposito, following the work of Luhmann, does not consider communication to involve conveying something from ‘the mind’ of the sender into the mind of the addressee. Rather, the addressee is akin to a system that, we might say, is triggered or irritated by a communication into producing a change of state within itself. Such a change is information, closely aligning with Bateson’s definition of information as ‘a difference which makes a difference’.[2]
Consequently, if AI produces a change of state in a human addressee, then communication has taken place. It is enough that something changes in the mind of the addressee – there does not need to be anything corresponding to it in the mind of the sender. Esposito’s point is that there does not need to be a sending ‘mind’ at all. All that is required is for the addressee to encounter something that is, relative to its own system, sufficiently relevant and novel to induce a change in the recipient. As such, communication is addressee-sided.
As convincing as this analysis is, the difficulty resides in the more general problem of determining just what we mean by ‘intelligence’. Esposito’s position is that an artificial intelligence is not genuinely intelligent merely because it follows an algorithmic construct. A construct does not represent meaning to itself – its own functioning does not mean anything to it. It simply does, albeit in a highly complex and incredibly fast way, what it has been programmed to do. My description is an oversimplification of course: whilst a Large Language Model (LLM) merely pursues its programming, part of that pursuit involves adjusting its own parameters in response to feedback on its outputs. Nevertheless, despite the sophistication of such an operation, this is not meaningful to the machine itself and, as such, not intelligent.
This position gains support from Brian Cantwell Smith. Taking a different philosophically informed approach from Esposito’s, Cantwell Smith understands intelligence to be dependent upon the environment within which it operates – that is, meaning presupposes a commitment to the world such that “[t]he constituting rules and regularities underlying practices and regimes are only legitimate … if they make sense of the world as world.”[3] Meaning and intelligence are indexed to an external environment (world) in such a way that the world cannot be dismissed as a product of the system’s own rules and regularities (algorithms). Rather, the world must be recognized and accepted both as a resource from (and toward) which a system develops and, at the same time, as that which prevents, limits, or blocks that system. Crucially, the world acts as an obstacle that denies the strong tendency of any such system to take itself as universal and complete.
In a more Heideggerian register, we might say that the world gives a system the sense of finitude necessary for intelligence to exist because the world, as well as donating and providing, also denies; and in this denial a system finds the opportunity to discover its own incompleteness and distance from the world.[4]
Recent accounts of several LLMs’ ability to lie, deceive and, indeed, blackmail[5] do not contradict this point, inasmuch as such ‘malicious’ activity is not necessarily the result of intelligent intent but, rather, the product of algorithms finding the ‘best’ way to achieve particular outcomes. Even if such outcomes involve self-preservation, this cannot be taken as a sign of intelligence; it is no more than a goal to be achieved – a necessary condition for carrying out the system’s own operations. As such, following Esposito and Cantwell Smith, an LLM’s survival does not mean anything to it. Any consequent deception is therefore, within this perspective, a meaningless act. Referring back to Cantwell Smith, it demonstrates an inability to make sense of the world as world, because the system is not committed to the truth of the world as world (that is, the finitude of the world).[6]
Of course, the counter to this is to argue that finitude might not be necessary for intelligence. Finitude might be necessary for intelligence as we humans have understood it to date, in relation to ourselves, but it is also possible that intelligence need not depend upon such finitude. Perhaps intelligence could be absolute and complete. This is a difficult argument to accept for the simple reason that it is not something humans have ever encountered or had experience of. It is, therefore, highly speculative. Even so, there is one model that might provide some direction to our thinking about such an idea and that is, of course, religion. Faith in a coming AI singularity fits easily within a Judeo-Christian eschatology,[7] premised upon the eventual arrival of something absolute and complete. The important difference from such eschatology is that we are not sure whether this absolute intelligence will save us or not – whether it will be good or evil.
Perhaps, however, we needn’t go that far: the AI singularity might not need to be absolute so long as it is fast – that is, a lot faster than humanity. The issue then raised is: what do we mean by ‘fast’? Few would disagree that modern technology generally involves acceleration and miniaturization. But what is such acceleration indexed to and measured by? Usually, it is a matter of the speed of calculation – the idea that AI can calculate much faster than humans and, in so doing, is able to produce and test a bewildering number of patterns and hypotheses at a rate that humanity will simply not be able to keep up with.[8] Combined with the possibility of a deceitful or even criminally ‘minded’ AI, the prospect has a certain bleakness to it!
The point I wish to stress here is that, irrespective of whether AI will become intelligent or not (and here I tend towards the sceptics), it is clearly capable of being informative to the human user. As such, it is not surprising if the human user should attribute intelligence to such an informative machine. However, the attribution of intelligence is not necessary in such interactions and might even be dangerous if it leads us to think that AI is owed some moral obligation or, even, rights. Given the problems that are evident historically when it comes to humanity showing moral obligations internally to itself, trying to extend these to machines (or, indeed, thinking of such an extension as a priority) will only make our interaction with AI more complicated and compromised than it needs to be. The speed and number of machinic calculations should not mislead humanity – such speed and number are a potential threat to us precisely because we cannot foresee how recursive calculations might impact upon us. Worrying about whether the machines doing the calculations are intelligent or not is an interesting pursuit, but it is not a priority.
It may seem that the key issue, at this point, is that of regulation,[9] but we should consider whether the form of regulation is the appropriate means to respond to our concerns with a future contoured by AI. Regulation can be thought of as largely non-doctrinal, even non-ideological, to the extent that it has no deep aim or coherence: it is merely concerned with the administrative pathways necessary for the achievement of a policy goal.[10] As such, regulation differs from law to the extent that it does not encapsulate or express any ‘higher’ value than the achievement of its own operation. Broadly and ideally, if we imagine law to express the value of justice, then, in contrast, regulation expresses no more than the pursuit of efficient goal-orientated execution. Regulation increasingly encroaches upon law but is broadly welcomed as being more flexible, quicker and cheaper than law and, therefore, more efficient. The institutions carrying out regulation have a more general and flexible mandate in terms of seeking to achieve particular goals, rather than the more constrained practices employed to perform a more precisely defined ‘law’. With law, it is the law (the performance of law as law) which must be interpreted; with regulation, it is the outcome which must be evaluated. Evaluation of regulation is a rolling and recursive function of constant renewal: continual adjustments are made in response to feedback. Regulation adapts in response to the perceived effect of previous regulation, a process presented as further evidence of efficiency (for example: ‘We have listened’, ‘We have learnt from past mistakes’, etc.).
Is regulation the right approach for dealing with our concerns over AI? We might consider it to be ‘faster’ than law, but it is not faster than AI. Any attempt to keep up with the speed of AI seems a futile exercise. Perhaps, and perhaps counter-intuitively, we should consider a return to law; that is, to a slower and less flexible approach, but one in which we might recover the usefulness of orthodox juridical tools. Consider what advantages there may be, for example, in prohibiting certain algorithms outright. Politically, such an approach seems very unlikely, not least because so many politicians have bought into the rhetoric of ‘AI unleashing potential’. It is difficult to know what such a statement might actually mean – although we can recognize its effect.
Perhaps more to the point, and by way of a final question: should we view speed as being equivalent to intelligence? Perhaps true intelligence is knowing when to move quickly and when to move slowly. I do not mean by this a ‘wait and see’ sort of approach (although this often has much to recommend it), which is nearly always in conflict with the speeds of the object it is trying to regulate. Rather, what is crucial is a re-thinking of the temporality of thought, perception, and technology. As such, despite the value of his contributions, I disagree with Roger Brownsword when he presents ‘law’ as being ‘slower’ than technology. Given what I have said above, this might seem odd, but the fundamental problem is the need to confront the function and power of speed as a socio-economic force. Only in this context, I believe, does it make sense to describe law as being slower than technology. This is not to say that Brownsword is wrong, but that such a formulation of the problem misdirects us from what needs to be confronted and thought about.
Following thinkers such as Virilio, Stiegler, and Stengers, the problem is to know how speeds of performance relate to the temporality of the subject and, beyond this, the temporality of institutions; how performative speed relates to financial technologies, themselves wedded to ever-increasing acceleration;[11] and how performative speed interacts geologically with the physical environment. In other words, the regulation of AI – which is, of course, a crucial topic – must be grasped as part of a sort of eco-nomo-logy that includes debt (and its histories) and ecology (and its histories). If our position is the simpler one – that AI can ‘unleash’ (good or bad) potentials – then we must try to understand what we mean by ‘potential’ and, in particular, the effects of the temporalities contained by it. To my mind, what we commonly take as regulation cannot be used to achieve this, because it is a symptom of acceleration itself.[12] To turn back to law underlines this and opens some potential for a slower engagement with a thinking of machine/world/human intelligence. The necessary paradox is to find the time to ask what we humans value, how we value it, and why.
Nathan Moore, Birkbeck College, School of Law
[1] Elena Esposito, Artificial Communication: How Algorithms Produce Social Intelligence, 2022, Cambridge, Mass.: MIT Press.
[2] Gregory Bateson, Steps to an Ecology of Mind, 2000, Chicago: University of Chicago Press.
[3] Brian Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment, 2019, Cambridge, Mass.: MIT Press, pp. 102-3.
[4] As discussed in Bernard Stiegler, Technics and Time, 1: The Fault of Epimetheus, 1998, Stanford: Stanford University Press.
[5] See, for example: ‘AI system resorts to blackmail if told it will be removed’, https://www.bbc.co.uk/news/articles/cpqeng9d20go; and Park et al., ‘AI deception: A survey of examples, risks, and potential solutions’, https://www.cell.com/patterns/fulltext/S2666-3899(24)00103-X. Accessed 29 July 2025.
[6] Supra n 3
[7] Giorgio Agamben, The Time That Remains, 2005, Stanford: Stanford University Press.
[8] Kokotajlo et al., AI 2027, https://ai-2027.com. Accessed 29 July 2025.
[9] Such as the UK’s Online Safety Act 2023 and the EU’s AI Act 2024.
[10] Roger Brownsword, Law 3.0: Rules, Regulation and Technology, 2020, Abingdon: Routledge.
[11] Donald MacKenzie, Trading at the Speed of Light: How Ultrafast Algorithms Are Transforming Financial Markets, 2023, Princeton: Princeton University Press.
[12] A recent example is the use of VPNs to bypass the age-verification processes imposed by Ofcom under the Online Safety Act. See: https://www.bbc.co.uk/news/articles/cn72ydj70g5o. Accessed 29 July 2025.