Agents Alice and Bob

Alice and Bob cannot just be anyone if we are to discuss the confidence between them. We do not expect a book to trust the table or a cup to control tea (even though, to a certain extent, such examples do not seem entirely irrational). The discussion of trust and control is restricted to entities that can hold beliefs and opinions and to entities that may or may not act the way they were designed to: to cognitive agents and intentional agents, respectively.

This restriction does not exclude inanimate objects from the discussion. Even though a book cannot trust the table, it may be convenient for us to perceive both as agents of assumed quality and then extend the notion of trust to them. Concepts such as trust, confidence or control are often applied to the activities of non-human, inanimate or social objects. We can trust our car, be confident in the postal system or control the computer. The anthropomorphic intentional stance [Dennett1989] may come in handy if entities are too complex to be fully understood. This is particularly true when we start dealing with complex digital systems that tend to behave in unpredictable ways.

Cognitive Agent Alice. Confidence is a belief, so it requires at least the ability to hold beliefs - internal states of mind. If we want to talk about Alice being confident about someone, then we must attribute to Alice the ability to hold beliefs. Certainly we have beliefs ourselves, and we can infer (through assumed psychological similarity) that other people have beliefs as well. The concept of people being confident about other people therefore seems justified. We can extend this concept to cognitive agents [Castelfranchi2000], covering not only humans but also other entities to which we attribute the ability to hold beliefs.

Moving away from natural cognitive agents (people) and simple systems (such as cars or elevators) towards more complex systems (such as software agents or 'the Internet'), we enter a realm where entities seem to be able to reason, yet we are aware that such reasoning is not 'natural'. It might have been programmed into them, or it might be the outcome of a collective mindset. The borderline between what is a cognitive agent and what is not cannot be clearly drawn: it is a matter of our perception whether the cognitive explanation seems plausible and beneficial.

Note that if we decide to consider an artificial Alice (e.g. a software agent that is supposed to mimic Alice's decisions), we gain the opportunity to inspect Alice's 'mental states' directly - something we cannot do with a human. However, this will help us less than we expect: if we want to re-create Alice's reasoning in bits and bytes, we should first understand how the real Alice thinks. If we decide to call certain internal variables 'trust', 'confidence' or 'control', then we simply assign convenient labels to states that happen to be available to us, with no guarantee that they bear any relevance to what humans are doing.
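The point can be made concrete with a small, purely illustrative sketch of such an artificial Alice (Python; the class and variable names are hypothetical). Nothing but the label connects the variable called 'trust' to whatever happens in a human mind:

    class ArtificialAlice:
        def __init__(self):
            # Internal states we choose to label 'trust', 'confidence', 'control'.
            # Unlike human beliefs, they are fully inspectable ...
            self.trust = 0.5
            self.confidence = 0.5
            self.control = 0.5

        def update_trust(self, positive_experience: bool):
            # ... but this is only one of many conceivable update rules, and
            # nothing guarantees it resembles how the real Alice revises beliefs.
            delta = 0.1 if positive_experience else -0.1
            self.trust = min(1.0, max(0.0, self.trust + delta))

Inspecting such variables tells us a great deal about the program, but nothing, by itself, about human trust.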

Intentional Agent Bob. Bob does not have to hold beliefs - unless we would like to consider the mutual relationship between Alice and Bob, beyond Alice's uni-directional assessment of her confidence in Bob. However, Bob must be able to have intentions and to exercise a certain free will, i.e. Bob should be able to determine his goals and act upon them. If Bob is not attributed free will, then his actions cannot be explained by his intentions; he cannot be trustworthy, and Alice has nothing to estimate - her estimates of his trustworthiness are void.

If Bob does not have intentions, then his behaviour should be explainable entirely through control, mostly through Alice's knowledge about Bob. If she knows him, then she can always predict his actions. This deterministic explanation of Bob's behaviour works well if Bob is a relatively simple entity, but fails if Bob is a complex one, even though a deterministic one. What is complex and what is simple depends only on Alice's perception of Bob.

Let's consider an example: Bob is a simple dice-tossing software program that draws numbers from its pseudo-random number generator. From the perspective of the software developer, Bob's behaviour is completely deterministic and (assuming a known starting point) the sequence of numbers is completely predictable. From Alice's perspective, however, Bob's behaviour is a mystery, as the generator produces a sequence that seems to be random. As Alice is unable to detect any visible rules in Bob's behaviour, she may even over-interpret Bob and attribute to him intentions, free will or even supernatural power [Levy2006].
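The gap between the two perspectives can be sketched in a few lines of code. The snippet below is only an illustration and assumes that Bob is built around a seeded pseudo-random number generator; the class name and seed value are hypothetical. Whoever knows the starting point can replay the sequence exactly; whoever does not sees only apparent randomness:

    import random

    class DiceTossingBob:
        def __init__(self, seed):
            # The developer knows the starting point (the seed) ...
            self._rng = random.Random(seed)

        def toss(self):
            # ... so, in principle, every toss is predictable to him.
            return self._rng.randint(1, 6)

    developer_view = DiceTossingBob(seed=42)
    replay = DiceTossingBob(seed=42)

    # Knowing the seed, the sequence can be reproduced exactly:
    print([developer_view.toss() for _ in range(5)])
    print([replay.toss() for _ in range(5)])   # identical sequence

    # Alice, who does not know the seed, observes only a seemingly random series.

The determinism is entirely real, but it is visible only to an observer who holds the right piece of knowledge.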

Let's consider another example, where the fully predictable Bob-computer suddenly slows down significantly. Certainly, Alice may attribute this behaviour to some technical problem: incorrect settings, overheating or perhaps some form of security breach. However, this requires expert knowledge from Alice. It may be easier for her to endow the computer with intentions and develop explanations that make use of such intentions (e.g. the computer does not like Mondays). A technical fault is interpreted as a sign of free will.

In both cases Alice interpreted Bob incorrectly (from the perspective of an expert), but in both cases Alice came up with an explanation that better suited her needs. Her intentional stance [Dennett1989] allows her to deal with Bob on known terms that are derived from human relationships. Surprisingly, Alice's strategy is rational: by altering her perception of Bob and by introducing intentions, she is able to decrease the perceived complexity of Bob so that she can reason about him. Without an intentional stance, the complexity of Bob would overwhelm her cognitive capabilities.
