Two Trusts. Complexity and Confidence

The model presented here proposes that trust and control together contribute to the perception of confidence, driven and limited by complexity. The concept of 'two trusts' (i.e. trust and control) is not a new one, even though the names used here may differ from those found elsewhere. The understanding that one should combine trust in intentions with a certain reinforcing structure provided by the environment is clearly visible throughout the literature.

What is called 'trust' here can be related to the intrinsic properties of a person, while 'control' roughly relates to contextual properties (see [Riegelsberger2005] for an overview). What we call 'control' here is elsewhere also known as control trust, reliance trust, assurance-based trust, calculus-based trust, guarded trust, deterrence-based trust or cognitive trust. In contrast, 'trust' is known as party trust, affective trust, identification-based trust, relational trust or extended trust.

This proliferation of terminologies has been one of the drivers behind introducing the construct of confidence in the place traditionally taken by trust, and restricting trust to what relates to the intentions of a person. The construct of control is then used to express what is enforced on a person. Even though it may cause some initial complications, the clarity it provides justifies this terminological decision.

The relationship between trust and control (expressed directly in those terms, no longer hidden in different flavours of 'two trusts') has also been widely studied. Apart from the general observation of a substitutive relationship between them, there is no general agreement about the exact nature of the relationship (see [Frankema2005]). Opinions range from opposition [Child2001] and dualism [Das1998] to duality [Mollering2005], [Farrell2004], where the two are not mutually exclusive, even though the existence of control may render trust irrelevant.

The proposition here is as follows. Trust and control both contribute to confidence (the substitutive approach), but while control is reducible to trust, trust cannot be reduced to control, due to the instrumentalisation of control. We are less concerned here with the influence of institutions, which are seen here mostly as instruments of control, not as a means to preserve a common identity.

Complexity and Confidence. While we are concerned with Alice in the rich context of a transaction, we are here mostly interested in her confidence in Bob. We recognise (and Alice recognises it as well) that Bob is the main source of her uncertainty regarding the future, so that the extent to which she is confident in him will be essential to her transactional confidence.

One of the main propositions of this book is that Alice can exchange her affordable complexity for a better assessment (lower uncertainty) of her confidence in Bob. Note that we do not propose here that she can actually 'build' or 'increase' her confidence by spending additional complexity, something that may seem contradictory to everyday expectations, where e.g. promises of increased control are supposed to lead to greater confidence.

Examples. Let's take two examples. In the first, Alice is considering a transaction with an Internet merchant that she has never dealt with before. Her confidence in the merchant is rather low, but she decides to spend some of her complexity (e.g. time, resources) to ask her friends about this merchant. Her friends reassure her that the merchant has a good reputation; Alice's confidence grows and she is willing to engage in the transaction. Additional complexity seems to increase the confidence.

In the second example, Alice is looking at the web page of a reputable company (this is actually a real case). The page states that the communication between Alice and the site is secured. Alice's confidence is therefore high and she may be willing to engage in the transaction. However, Alice decides to spend some of her affordable complexity and check what her browser says about the connection. To her amazement, the browser claims that the connection is not secured. Here, Alice's confidence decreases because she has spent additional complexity.

Proposition. The proposition here is that for every entity there is an optimum level of confidence (potentially unattainable for Alice) that is appropriate for it, i.e. the level where Alice's confidence exactly matches its trustworthy behaviour and where Alice's trust matches its trustworthiness. Within the scope of the transaction Alice may have problems determining this level, as she may have insufficient information. By affording additional complexity, she can expand her horizon beyond what is narrowly available and gather more information, thus allowing her to adjust her confidence.
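To make this proposition more tangible, the sketch below is one possible, simplified reading of it, not a formal part of the model: we assume that Bob's trustworthy behaviour can be treated as a fixed but unknown probability, and that each unit of Alice's affordable complexity buys one independent piece of evidence about it. The names true_trustworthiness and assess_confidence are introduced only for this illustration.

    # Illustrative sketch (assumption, not the book's formal model): Alice's
    # assessment of Bob as evidence gathering. Each unit of affordable
    # complexity buys one independent observation of Bob's behaviour; spending
    # more complexity narrows the uncertainty around the (unknown to Alice)
    # optimum level of confidence.
    import random
    import statistics

    def assess_confidence(true_trustworthiness, complexity_spent, rng):
        """Return (estimate, uncertainty) after spending complexity_spent units."""
        if complexity_spent == 0:
            # Narrow scope of the transaction: only a single first impression.
            return rng.random(), 1.0
        observations = [1.0 if rng.random() < true_trustworthiness else 0.0
                        for _ in range(complexity_spent)]
        estimate = statistics.mean(observations)
        # Standard error of the mean as a simple proxy for Alice's uncertainty.
        uncertainty = statistics.pstdev(observations) / (complexity_spent ** 0.5)
        return estimate, uncertainty

    rng = random.Random(1)
    for spent in (0, 5, 50, 500):
        est, unc = assess_confidence(0.7, spent, rng)
        print(f"complexity={spent:4d}  estimate={est:.2f}  uncertainty={unc:.2f}")

Note that in this reading the estimate may move down as well as up as complexity is spent, which mirrors the two examples above; it is only the uncertainty that consistently shrinks.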

Alice can simply include in her horizon what is directly relevant: Bob, the world, etc. However, the more powerful method that Alice can use to expand her horizon is to employ instruments that involve other entities: others' opinions about Bob, others acting as enforcers or guarantors, etc. Those agents become a part of the transaction in addition to Bob, i.e. Alice is effectively re-defining what she means by the transaction. Adding those agents comes at the cost of growing complexity, as Alice must manage them and assess her confidence in them (potentially on the basis of yet another agent). However, by expanding the scope of the transaction, Alice is able to benefit from a more complete and holistic view of what she can expect.

Considering the second example, Alice has moved beyond the obvious (what the website has said) and expanded the scope to add her browser. Alice may not stop here: she may add her bank, which probably provides certain insurance, she may consult her friend to learn how risky it is to deal over an unprotected link, etc. In this process, she is building a relatively complex horizon of the transaction that includes not only her and the website but also her browser, her bank, her friend, etc. While she is still concerned about her confidence in the website, her reasoning includes several other entities.
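One possible sketch of this scope expansion is shown below. It is only an illustration under our own assumptions: every added agent contributes an opinion about the website, weighted by Alice's confidence in that agent, and costs part of her affordable complexity to manage; the Agent fields and the weighting scheme are not taken from the text.

    # Illustrative sketch (our assumptions): expanding the horizon of the
    # transaction. Each included agent contributes an opinion about the
    # website, weighted by Alice's confidence in that agent; each agent also
    # consumes some of Alice's affordable complexity.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        opinion: float              # what the agent suggests about the website (0..1)
        confidence_in_agent: float  # Alice's confidence in the agent itself (0..1)
        management_cost: float      # complexity needed to include and manage the agent

    def expand_horizon(agents, affordable_complexity):
        """Weighted view of the website, limited by Alice's affordable complexity."""
        included, spent = [], 0.0
        for agent in agents:
            if spent + agent.management_cost > affordable_complexity:
                break  # Alice cannot afford to manage any further agents
            included.append(agent)
            spent += agent.management_cost
        if not included:
            return 0.5  # no evidence at all: maximum uncertainty
        weight_sum = sum(a.confidence_in_agent for a in included)
        return sum(a.opinion * a.confidence_in_agent for a in included) / weight_sum

    horizon = [
        Agent("website itself", opinion=0.9, confidence_in_agent=0.3, management_cost=1),
        Agent("browser",        opinion=0.2, confidence_in_agent=0.8, management_cost=1),
        Agent("bank insurance", opinion=0.7, confidence_in_agent=0.9, management_cost=2),
        Agent("friend",         opinion=0.5, confidence_in_agent=0.6, management_cost=2),
    ]
    for budget in (1, 2, 6):
        print(f"affordable complexity={budget}: confidence ~ {expand_horizon(horizon, budget):.2f}")

With a budget of 1 only the website's own claim is heard and the result is high; adding the browser pulls it down sharply; adding the bank and the friend moves it again, giving the more complete, if more expensive, view described above.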

If Alice does not have much complexity to spend, we can easily see that her assessment may be off target: it may be both wrong (under- or over-estimated) and uncertain, e.g. Alice will build it only on the impressions that are available to her within the narrow scope of the transaction. What is, however, the situation if Alice's affordable complexity is unlimited and she is willing to spend it on the transaction?

She may expand the scope to include other entities and ultimately she may gain all available information about Bob and all the control over Bob she can possibly have. Her knowledge of all possible interactions will give her an exact and certain understanding of the extent to which she should be confident in Bob, but nothing more.
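In the simple evidence-gathering reading sketched earlier (an assumption of ours, not a formula from the text), this limit can be stated directly: if Bob behaves in a trustworthy manner with probability \(\theta\) and Alice's assessment \(\hat{\theta}_n\) is built from \(n\) independent observations bought with her complexity, then

    E[\hat{\theta}_n] = \theta, \qquad \operatorname{SE}(\hat{\theta}_n) = \sqrt{\frac{\theta(1-\theta)}{n}} \longrightarrow 0 \quad (n \to \infty)

Unlimited complexity drives the uncertainty to zero, but the value the assessment converges to is simply Bob's trustworthiness itself: nothing more.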
