The Model of Confidence: Introduction and Fundamentals

Alice's transactional assessment of her confidence in Bob is an exercise that she performs in her mind - often quickly and possibly partly subconsciously. We do not have access to the details of this process (probably even Alice could not tell us everything), but we can speculate about its nature and its details.

This model is an outcome of such speculation. It does not mean that Alice reaches her assessment exactly through the process described by this model (that is actually quite unlikely), but only that the model is a sufficient explanation of this process.

The main purpose of the model proposed here is to serve as a tool for a better understanding of the human process of deciding about confidence. Another goal of the model is to bridge the gap between the sociological and psychological perception of trust (confidence) and the perception of confidence as a measurable property of digital systems. The model can be used to explain phenomena of confidence in different application areas related to the Internet or to other forms of digitally mediated interaction such as e-commerce, e-government, e-health, etc.

The model uses complexity both as the underlying driver of confidence and as its limiting factor. Within the model, complexity is exchanged for a better assessment of confidence (one with lower uncertainty), and such an assessment is built through trust or through control, on the basis of available evidence.

This chapter starts with a review of fundamental concepts (discussed earlier). A short discussion of trust and control follows, together with a discussion of the relationship between complexity and confidence. The model is presented next, including the identification of entities, evidence and enablers, as well as some formalisation of the concept. An extended discussion of trust and control closes the chapter.

Fundamentals. The starting point for the model comes from Luhmann's observation [Luhmann1979] that trust arises from our inherent inability to handle complexity, specifically the enormous complexity represented by all the possible variants of the future. To illustrate this concept, let us consider Bob, who has a certain amount of independence - he is an intentional agent.

A human is an excellent example, but social groups, complex technical systems [Dennett1989] or animals may be considered as well. Bob's independence is demonstrated by the externally perceived uncertainty about the future actions he may take (sometimes referred to as 'free will').

Confidence. Let's further assume that Alice (a person) is pursuing her goals and - in order to reach those goals - she needs certain courses of action to happen in the future. She may be after something very simple, like safely crossing the street, or something complicated, like receiving her degree. Alice may be unwilling to proceed unless she has a certain level of confidence that the beneficial course of action will happen - the driver will slow down, her tuition has been paid, etc. Her goal is contingent on Bob's actions - Bob can make or break her ability to achieve it. Bob may suddenly accelerate while driving his car, or Bob may embezzle the money that Alice has been saving.

Trust and Control. Alice may have some reasons to expect that Bob will actually act in her favour. For example, Alice may have ways to influence Bob (she may cross the street next to a policeman), or she may know Bob well enough to predict the course of his actions (she has chosen a reputable bank). If Alice can reasonably enforce or expect a favourable action of Bob, then Alice has control over Bob, usually through specific instruments of control (a policeman, the bank's reputation). On the other hand, if Bob's behaviour cannot be reasonably controlled but depends only on his intentions, Alice must resort to trust - trusting that Bob's actions will be favourable.

This leads to an interesting relationship between trust and control, observed also by, e.g., Castelfranchi [Castelfranchi2000]. Trust is in fact a deficiency of control that expresses itself as a desire to progress despite the inability to control. Symmetrically, control is a deficiency of trust. This resonates with Cummings [Cummings1996], who identifies trust as a cognitive leap of faith beyond reason. Deutsch [Deutsch1962] has also noticed the complementary relationship between the two. This leads to an additional observation: even though trust cannot be analysed in the same terms as control, both must share certain common characteristics.

If we separate the confidence in the outcome of some future-looking action into a control-based part and a trust-based part, we can assume that overall confidence is driven by the 'sum' of confidence from both sources: control and trust. We do not define this 'sum' in arithmetical terms - in fact, we do not prescribe any particular method of 'adding' trust, control or confidence. However, we can observe that trust can be a substitute for control and control can be used as a substitute for trust.
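This relationship can be written down symbolically (a notational sketch only: the operator below is our assumption, standing for any suitable composition, and does not denote arithmetic addition):

\[
\mathrm{conf}_{\mathit{Alice}\to\mathit{Bob}} \;=\; \mathrm{trust}_{\mathit{Alice}\to\mathit{Bob}} \;\oplus\; \mathrm{control}_{\mathit{Alice}\to\mathit{Bob}}
\]

Substitutability then simply means that a shortfall in one component of the composition can be compensated by the other.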

Transactional Horizon. Alice's success within a transaction is likely to be contingent not only on Bob, but also on other entities. For example, Alice's safe passage depends on the weather, the street lights, the policeman, other drivers, the time of day, etc. Unfortunately, Alice cannot include everyone and everything in her reasoning - it would be too complex.

Therefore Alice decides on the transactional horizon: what she wants to include in her reasoning. The extent of such a horizon depends on her disposable complexity - her ability to manage a number of entities and the relationships between them.
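A minimal sketch of this idea in Python (the relevance and complexity attributes, the greedy selection rule and all numeric values are purely our illustrative assumptions; the model itself does not prescribe any algorithm):

    from dataclasses import dataclass

    @dataclass
    class Entity:
        name: str
        relevance: float   # how strongly this entity affects the outcome
        complexity: float  # cost of including it in Alice's reasoning

    def transactional_horizon(candidates, disposable_complexity):
        """Greedily include the most relevant entities until Alice's
        disposable complexity is exhausted."""
        horizon = []
        budget = disposable_complexity
        for e in sorted(candidates, key=lambda e: e.relevance, reverse=True):
            if e.complexity <= budget:
                horizon.append(e)
                budget -= e.complexity
        return horizon

    candidates = [Entity("Bob (driver)", 0.9, 3.0),
                  Entity("policeman", 0.5, 1.0),
                  Entity("weather", 0.3, 1.0),
                  Entity("street lights", 0.4, 1.0),
                  Entity("other drivers", 0.6, 2.5)]
    print([e.name for e in transactional_horizon(candidates, 5.0)])

With a budget of 5.0, 'other drivers' no longer fits once Bob is included, so Alice's horizon contains Bob, the policeman and the street lights - exactly the kind of pruning that disposable complexity forces upon her.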

Evidence and Enablers. Let us finally have a look at the foundation of Alice's knowledge in both areas, control and trust. Usually Alice is in possession of certain evidence that can be used to support her decision. Even though no amount of evidence is sufficient to guarantee the future, Alice is still able to reason on the basis of available evidence - some evidence seems to be better than none at all.

In order to proceed, Alice builds on four enablers. She must be reasonably confident about the availability of evidence. Similarly, Alice must be reasonably confident about Bob's identity - otherwise she cannot consistently attribute evidence to Bob. As she is assessing Bob's intentions, she should be confident about Bob's ability to have intentions - that Bob corresponds to her perception of what he is. Finally, she must be confident that others who deliver opinions about Bob are honest - that they tend to tell her the truth.
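To make this structure explicit, here is a minimal Python sketch treating the four enablers as preconditions of the assessment (the enabler names follow the text above; the 0..1 scale, the threshold and the all-or-nothing gating rule are purely our assumptions, not part of the model's definition):

    # The four enablers named in the text: availability of evidence,
    # Bob's identity, Bob's similarity to Alice's perception, and the
    # honesty of those who deliver opinions about Bob.
    ENABLERS = ("availability", "identity", "similarity", "honesty")

    def assessment_enabled(confidence_in_enabler, threshold=0.5):
        """Return True if Alice is sufficiently confident about every
        enabler; the numeric scale and threshold are hypothetical."""
        return all(confidence_in_enabler.get(e, 0.0) >= threshold
                   for e in ENABLERS)

    # Example: Alice is confident enough about all four enablers.
    print(assessment_enabled({"availability": 0.9, "identity": 0.8,
                              "similarity": 0.7, "honesty": 0.6}))  # True

The gating rule reflects the intuition that the enablers are not themselves sources of confidence in Bob, but preconditions without which evidence cannot be meaningfully collected and combined.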
