Confidence as a Strategy
There are several reasons why Alice may want to be confident about Bob. We can distinguish two main strategies that will be apparent throughout the book.
First, Alice can treat confidence as a limiting factor in her dealings with Bob, i.e. she may decide that she will not proceed unless she can attain sufficient confidence. In this case, Alice sets the required threshold upfront and tests whether she can reach it. This is a strategy of luxury, as it assumes that Alice is free to decide whether to proceed with Bob or not - that is, the transaction is not essential to her. Say, Alice may decide not to buy a new pair of shoes if the shop assistant does not inspire sufficient confidence - after all, she already has another pair.
Second, Alice may use her confidence as a selecting factor, so that she is willing to proceed with whoever inspires the highest confidence, regardless of the actual level. This can be described as a strategy of necessity: Alice does not define any minimum level of confidence, but uses confidence comparatively among contenders. Clearly, in order to employ this strategy, Alice must have a choice, e.g. between Bob and Dave. Say, Alice is hungry and would like to choose the restaurant that she is most confident about. She will certainly choose some restaurant (she cannot afford to wait); the only question is which one.
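The two strategies can be sketched in code. This is an illustrative sketch only, not taken from the book; the function names, the numeric confidence scores and the threshold value are all hypothetical:

```python
def limiting_strategy(confidence_in_bob, threshold):
    """Strategy of luxury: proceed only if confidence reaches a preset threshold."""
    return confidence_in_bob >= threshold

def selecting_strategy(candidates):
    """Strategy of necessity: pick whichever candidate inspires the highest
    confidence, regardless of its absolute level."""
    return max(candidates, key=candidates.get)

# Alice can walk away from the shoe shop if confidence falls short...
print(limiting_strategy(0.6, threshold=0.8))   # False: she keeps her old pair
# ...but she must eat somewhere, so she compares and takes the best on offer.
print(selecting_strategy({"Bob's Diner": 0.4, "Dave's Grill": 0.55}))
```

The design difference is visible in the signatures: the limiting strategy needs an absolute threshold and can return "no deal", while the selecting strategy needs at least two candidates and always returns one of them.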
What if Alice must proceed but there is only Bob to choose from? In this case she does not have to bother with any evaluation of confidence, as she is not in a position to make a decision anyway. For her own peace of mind she may pretend that she is confident about Bob, and for the benefit of her experience she may still evaluate her confidence - but neither has any practical bearing. This is one of those cases where her confident behaviour is not driven by her confidence [Deutsch1965].
Bob can support Alice (by exhibiting trustworthy behaviour) for several reasons, possibly too numerous to be discussed here. He may feel an ethical obligation or he may be driven by greed. He may understand that his interest is aligned with Alice's [Hardin2002] or he may be afraid of retaliation in case of default. If Alice is interested in a one-time relationship, either will work for her. If Alice prefers a long-term relationship, she may be willing to explore Bob's motivation to see whether she can increase his trustworthiness.
No Confidence. An interesting question is whether Alice can live without confidence (specifically, without a certain level of trust). It seems (e.g. [Kipnis1996]) that such a life is possible but hardly desirable. To live without confidence means to give up expectations regarding others, or to rely entirely on faith rather than on confidence (so that no evidence is sought and no relevance to the real world is expected).
Life without confidence may eventually lead to paranoid states in which a person is unable to develop any confidence in the external world and thus either isolates herself in a controllable world or accepts the randomness of her future, with no hope of establishing any expectations.
Looking at digital systems, we can see again that there is little difference: if the system is not confident about information that flows from another system, then the only solution is to ignore it. It may work well for isolated systems, but this is hardly a proposition for a converged, networked world.
It is possible to strike a balance between trust and control within the scope of confidence. Alice can offset her lack of trust with increased control over Bob, or she may decide to trust Bob and relax her control. She may also try to reduce her dependence on Bob, or shift it towards more trusted subjects (e.g. she may prefer to depend on Carol rather than on Bob), etc. In other words, even though life without any need for confidence seems an unlikely proposition, Alice has a certain flexibility in how, whom, and to what extent she would like to trust and control.
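The offsetting of trust against control can be illustrated with a deliberately toy model. The additive formula, the cap at 1.0 and all numeric values below are assumptions made purely for illustration, not the book's model:

```python
def confidence(trust, control):
    """Toy model: total confidence as the capped sum of trust and control,
    each expressed in [0, 1]. A shortfall in one can be offset by the other."""
    return min(1.0, trust + control)

REQUIRED = 0.7  # hypothetical level Alice needs before she will proceed

# Alice trusts Bob only a little, so she compensates with control
# (contracts, monitoring, access codes)...
print(confidence(trust=0.3, control=0.5) >= REQUIRED)
# ...whereas she trusts Carol enough to relax control altogether.
print(confidence(trust=0.8, control=0.0) >= REQUIRED)
```

Both calls print `True`: the same required level of confidence is reached by two very different mixes of trust and control, which is exactly the flexibility described above.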
Specifically, we can see the tendency to talk about trust but implement control - in the form of surveillance systems, access codes, complex authentication rules, audit trails, etc. This is discussed throughout the book, but again there is little difference between our social behaviour and the technology: in both cases we can see the desire to reduce the dependence on trust (so that only a minimal 'root trust' remains) and to build on control.
Pragmatics of Confidence. This book intentionally stays away from ethical judgement of trust, control and confidence. It is widely accepted that trust is morally good, superior and desired (and therefore that distrust is bad, inferior and unwanted, e.g. [Kipnis1996]). Even though opinions about the moral value of control are more diverse, the book offers no judgement regarding those beliefs and there will be no further discussion of them (with the exception of distrust). The interest of several researchers seems to lie in improving society, mostly by maximising the average level of trustworthiness, with a certain disregard for the needs of the individual.
The book does not aim to show how to achieve an all-trusting society, or even an all-trusting individual. Instead, a pragmatic and personal approach is taken. We are interested here only in Alice's welfare, which we understand as her ability to make the right decision about her confidence in Bob. What we maximise is not general trustworthiness but Alice's ability to properly assess her confidence in Bob. The better she is able to judge Bob, the better for her. Whether she ends up being confident about him or not is secondary. It is also secondary whether she ends up trusting him or controlling him.
The way the problem is treated in this book borrows heavily from engineering. Confidence is viewed here as the optimisation of a decision-making process under bounded rationality. The main question is how Alice, having limited resources, can achieve a level of rationally justified confidence in Bob, where her trust matches his trustworthiness and her confidence matches his trustworthy behaviour.
Neither trust nor control is considered particularly beneficial, morally superior or desirable, either for an individual or for society. The overall benefit is seen in the ability of every Alice to properly assess every Bob, potentially through a proper balance of trust and control. There is an obvious and immediate benefit for Alice, but Bob and society as a whole benefit from it as well. Strange as it may seem, unbounded trust and trustworthiness can be detrimental to Alice, to Bob and to all of us. We will explore this claim below.
Let's first assume that Alice is selfish and driven only by her self-interest. She actually has two interests: one is to achieve what she desires, the other is to do so at minimum expense (so that she may e.g. pursue other goals as well). This calls for low complexity in the desired relationship with Bob - something that can be delivered by trust rather than control.
Clearly, Alice benefits significantly from being able to determine whether she can trust Bob. She can achieve her goals easily if Bob is very trustworthy: he will help her just because this is the way he is, and she will not have to spend her resources on controlling him. Due to the directional nature of the relationship, we do not assume anything more of Alice. She does not have to be trustworthy or reciprocate Bob's trustworthiness. She therefore has a strong incentive to become a free-rider, a person who benefits from others' trustworthiness without contributing anything.
Alice may not be selfish and may want to go beyond one-time contact with Bob to invest in a long-term relationship. She can nurture Bob's trustworthiness in the expectation that she will benefit from it in the future. For Alice, this is an investment proposition: she invests her time, effort (and possibly other resources) expecting payback in the future. Even though it looks better, the endgame is the same: a trustworthy Bob and an Alice exploiting him.
Surprisingly, then, Bob's high trustworthiness is not beneficial for society, as it allows Alice to live off Bob, against his will, thus decreasing his ability to contribute to society. Alice may gradually become interested (and skilled) in exploiting others without contributing anything, thus becoming a ballast to society. It is beneficial neither for Bob (who is actually exploited) nor for Alice (whose standards may deteriorate). So, Bob's unbounded trustworthiness, if met by a selfish Alice, leads to damage to both (as well as to society).
Let's now consider situations where Alice is unable or unwilling to properly assess her confidence in Bob - that is, we impair what we claim to be her core competence. As long as she incidentally matches his trustworthiness and his trustworthy behaviour, no harm is done. However, if Alice over-estimates her confidence in Bob, she may easily end up pursuing a risky choice, while if she under-estimates Bob, she may end up forgoing a promising one. Neither case is attractive, so again it seems that for Alice the best option is to be confident just enough - not too much and not too little.
Society does not benefit from such mis-assessments either. Even though certain elements of optimism [Marsh1994b] are beneficial, undeserved optimism leads to reverse exploitation - Alice being abused by Bob. Undeserved pessimism, in turn, leads quite easily to paranoia and states of social exclusion, where Alice intentionally limits her contact with other members of society - not a great benefit either. So, by impairing Alice's ability to assess her confidence, we have made her either an easy prey or a social outcast. Neither prospect looks particularly attractive.
Finally, let's consider a society that is interested in maximising the total level of confidence among people (specifically, in increasing the level of trust, e.g. [Fukuyama1996]), in the expectation that this will lead e.g. to lower barriers and general improvements in our relationships. Such an increased level of confidence may be beneficial for the development of trade, the growth of civic societies, etc. Society will be satisfied with an increase in the total level even if some members suffer in the process.
The vision seems valid, with one caveat: it assumes that all people become trustworthy at the same time, so that all people benefit from it. Unfortunately, the missing part of such a vision is how those people should become trustworthy in a synchronised way. Such a situation is simply not sustainable: lapses in judgement, differences in competence, natural errors and fluctuations continuously make people more or less trustworthy, and make others trust them less or more. The mechanism of confidence is statistical and subjective.
Therefore, in the absence of a plausible explanation of how to make everybody equally and simultaneously trustworthy, we must assume the existence of people of different levels of trustworthiness. If this is the case, then again, Alice is better off if she is able to determine the confidence that is due. If society supports trust, she may operate in the high registers of trust rather than the low ones, but her ability to determine the level of confidence remains essential.
Actually, total trustworthiness is not a prerequisite for market efficiency. It has been demonstrated for negotiated contracts [Braynov2002] that a market in which agents are trusted to the degree they deserve to be trusted (so that trust reflects trustworthiness) is as efficient as a market with complete trustworthiness. Complete trustworthiness is not a necessary condition for market efficiency, but the ability to properly assess others' trustworthiness apparently is. If Alice's trust matches Bob's trustworthiness, they are both individually better off - and society is better off as well. If she is unable to assess Bob correctly, the results are socially unacceptable or may lead to failures.
As we can see, whether Alice is selfish or not, whether she wants to invest in the relationship or not, and whether society is trusting or not, her core skill is the same: to be able to assess her confidence in Bob. Too much blind trust is as damaging as too little.
What if Bob is no longer a person? What if Bob is a computer - or what if Alice does not know whether Bob is a computer or not? Our confidence in computers (and in all other classes of devices) differs from our confidence in people. We have significant experience in dealing with people but very limited experience in dealing with devices. Our expectations are different: we do not expect the computer to act for our benefit; we want it (quite often in vain [Lacohee2006]) to do what it has been told to do. We trust devices and we are confident about them in a way that is simultaneously similar to and different from our relationship with people.
It is similar because we are confident if we can trust or control. As devices grow more complex, we tend to trust - mostly because we cannot control them [Dennett1989]. Similarly, we assess our trust and our confidence according to what we know about devices. Even though the expectations and the evidence are different, the mechanism seems to be similar.
It is different because this relationship is morally void. There are no ethical guidelines on whether to trust the Internet or not. There is no moral ground for trusting a faulty computer in the expectation that it will reciprocate. Our relationship is perhaps equally emotional, but otherwise it is utilitarian and rational.
However, the core competence remains the same: to be able to assess whether we should be confident about the device. Pragmatic as it is, this encapsulates exactly what we always need.