The design challenge of pervasive computing (vision development for the European Union research consortium Convivio, 2001-2003)

Doors was responsible for vision building during the first research cycle of Convivio – the European Union network for social computing; its early members included Xerox, King’s College London, Philips, Deutsches Forschungszentrum für Künstliche Intelligenz, Fraunhofer-Gesellschaft, and Consorzio Roma Ricerche.


The new medicis

Those were the days. This text, which was written for Japan’s Hakuhodo advertising agency, is a reflection on the changing nature of sponsorship. At the time (1990) I was convinced I had invented a killer business concept – ‘cultural engineering’. Unfortunately, when Japan’s bubble economy abruptly collapsed in 1992, so, too, did my concept: it turned out that the ‘cultural imperative’ lauded in my text was not an imperative after all – it was an easily dispensed-with luxury. Japanese companies cancelled all such activities (which included 70 per cent of my then company, Design Analysis) in a matter of weeks when the economy went bad.
1 The cultural imperative
Passive, hands-off patronage of the arts is a modern invention. And a short-lived one, if you believe the signs. Profound changes to the nature of modern business – some of them dating back 40 years, some unfolding within the last five – have created a ‘cultural imperative’ for advanced organisations. For them, culture can so dramatically enrich business performance that cultural policy is moving to the centre stage in discussions of strategy; it is no longer segregated from other marketing or communications tactics.
It is a dramatic change. After all, the notion that ‘culture’ should operate in a privileged, protected realm, free from interference by state or business, is deeply rooted in 20th century industrial culture. Imagine the outrage if John D Rockefeller had commanded Jean Dubuffet to paint ‘The Glorification of Standard Oil’: such things simply are not done by modern patrons, who claim to be motivated by notions of disinterested civic duty and public service. All mention of marketing, or corporate identity, is rigorously excluded from the traditional scenario.
But things were not always so clear-cut, as the American critic Joseph Alsop recalls. During the 17th century Cardinal Barberini, whose uncle was Pope Urban VIII, commissioned Pietro da Cortona to paint the ceiling of his new palazzo – a space the size of an (American) football field – with a vast narrative glorifying…Pope Urban VIII! ‘Cortona did not flinch, nor hanker to paint something more relevant; instead, he cheerfully produced one of the most marvellous works of decorative art of the seventeenth century.’ From the first Babylonian, Chinese, Greek and Roman civilisations, through to the Middle Ages and the industrial revolution, bankers, politicians and potentates have employed culture overtly as a weapon of policy: the arts were always involved intimately in the articulation and exercise of power.
It is in this context that the time has come to look critically at the contemporary myth of ‘passive’ patronage. We would argue that it is only with the modern concept of ‘artistic license’, and the growth of a market which derives value from the idea of artistic autonomy, that art entered its privileged realm, supposedly protected from the venal ambitions of the less enlightened patrons in earlier historical periods.
Various factors sustained the myth of passive patronage for much of the twentieth century: the decline of religion; the rise of the merchant classes; changes in the techniques and tools of artistic production; new patterns of consumption; the emergence of a museum culture; the rise of the curator, the dealer and the critic. Above all, the art marketing system has moved from the margins to the centre of society, and has successfully given artefacts the status of financial instruments, like banknotes, or gold bars. Critics have already noted that this superheated financial context questions the freedom of the artist: what price artistic autonomy when paintings sell for $40 million?
This much, many have observed already; the past 30 years have been punctuated by a number of important critiques of the ‘art system’. Today, however, things are changing once again. Profound changes to the nature of competition in world markets, combined with the steady ‘dematerialisation’ of business, are changing, fundamentally, the relationship between patrons and artistic production. Cultural patronage has become a powerful weapon in competition between firms, cities, even states – and it is this business imperative, rather than the brickbats of critics, that, in our judgement, will destroy the myth of passive patronage.
Culture and commerce have been on convergent paths throughout the past 40 years, as a change in emphasis occurred in the advanced economies – from production to information.
During the 1950s and 1960s, advertising emerged as a potent means of modifying consumer behaviour. With the growing sophistication of mass production systems, functional differences between products diminished; even as cars, appliances, even washing powders became more technically sophisticated, the marketing emphasis on their function decreased. In this sense, it was advertising that began the so-called ‘softening’ of the Western economies, as companies realised that managing perception, not just improved product performance, was the key to competitive success.
Then, during the 1970s, the development of marketing provided business with new tools – research and statistical skills by which it could analyse the composition and behaviour of consumer groups in ever greater detail, and with more subtlety. Marketing soon altered the parameters of discussions about business strategy: the convergence of information about the profile of consumer markets, with the growing flexibility and responsiveness of production, and the new tools of global communications, rapidly increased the ‘dematerialisation’ of business. With marketing, information – in the form of data about consumers, or programmed production, or communications – became more important than matter.
After advertising and marketing, a third transformation occurred in business during the 1980s, when design – which had, until then, been a specialised, technical activity – moved to centre stage as the new agent of perception management. In particular, one offshoot of old-style design, corporate identity, boomed: by 1990, it had become a $30 billion communications niche sector by itself. The explosion of interest in – and spending on – CI reflected a move away from the marketing of individual products to a more global concept of ‘branding’, in which fashioning a company’s image became as important as fine-tuning its products, or its product advertising.
During the late 1980s, the process of corporate identity management became steadily more sophisticated. For many years, corporate identity remained essentially a matter of visual identity – and in particular a company’s logo or letterhead.
But steadily, our understanding of identity has broadened.
It is no longer enough to introduce a new visual identity – a complete combination of strategic, organisational and behavioural change is required. For example, management structures need to become less hierarchical and more ‘horizontal’ to improve the dissemination of know-how. ‘Identity’ also involves innovativeness, both within a company and between the company and outsiders. And ‘identity’ does not just influence a company’s staff – it is also a powerful element in the attraction of new recruits, an important issue as labour shortages increase. For all these reasons, changes in accounting procedures in Europe and the USA led to the formal valuation of brand identities: thanks to the tax inspectors, and the M&A boom, intangible concepts like the name of Coca Cola, or Sony, are now ranked on a world scale. This objectification of their value has been followed, inevitably, by a further increase in investment.
Just as the value of a company’s intangible assets, and in particular its corporate identity, has been reassessed, so, too, the tools and tactics available to manage and improve such intangible assets have multiplied. For two reasons – both technological. First, information technology has provided business with an ever more detailed and up-to-date picture of consumer behaviour – the audience has become fragmented, but business has a clearer understanding of the fragments. With new database segmentation, for example, it is now possible to predict the performance of certain kinds of direct mail before it goes out. Secondly, the mediums of communication have become fragmented – with the result that mass communications, such as television advertising, are proving steadily less cost effective.
The phenomenon of ‘niche markets’ will be familiar to readers of this book, and does not require further detail here; but a note about the decline of mass communications is warranted. In the UK, for example, the total amount of television viewing declined by 6.5% between 1985 and 1990; among wealthier consumers, the decline was more pronounced – socio-economic AB consumers watched 9.3% less television during this period. In the words of The Independent, a London newspaper, ‘television is simply becoming more peripheral to more people’.
The irony is that many business strategists thought the spread of mass communications would create a new class of ‘global consumer’, and the 1980s concept of the ‘global product’ was a direct result of this expectation. Unfortunately for this theory, cost savings from economies of scale in manufacturing global products, even where they have been achieved, tend to be offset by the increased costs of the specialised marketing communications that have to be employed in different markets.
Now, the trend is towards what I have called elsewhere (*) ‘deep marketing’ – a complex, multi-faceted, constantly changing strategy in which a wide variety of communications tactics are combined with a growing integration of research, design management, production and marketing processes.
Deep marketing addresses not only the outside world of consumers, but also the inside world of a company’s own people. And the range of tactics to be used (see Table XX) grows longer by the day. Today, specialised marketing and communications are a $650 billion industry world-wide.
Deep marketing is a response to a shift from products to services which has transformed the nature of competition in the following way: firms now compete for the attention of consumers who are more visually literate, more sophisticated, more culturally aware, than at any time in history. In buying a car, a pension plan, a computer, or a box of muesli, consumers have come to assume that competing products will probably perform more or less equally well – where they discriminate is in the value added to a product, and to the firm behind it, by intangible factors such as image and style, service, or perceived sophistication.
But if the 1980s were dominated by the sophisticated use of visual imagery to enhance the brand identity of whole companies, the 1990s and beyond will witness a new industrial culture based on learning and creativity within, and between, companies and their customers – the ‘aestheticisation of business’.
As the business theorist Charles Hampden Turner explains, ‘we are all, now, in an economic race to learn. The wealth-creating capacities of a nation are no longer contained in their physical resources, nor even in their comparative economic advantages, but in the innovativeness and learning of their culture…the more knowledge that is organised into a product, the less the likelihood of competition’.
Hence the cultural imperative. Companies – and the argument holds just as well for cities, or indeed states – can no longer compete only with products, or with their image: they must find another way to express their intelligence, their individuality, their sophistication. And their new tactic? It is cultural engineering, as the following sections explain.
2 Cultural strategy
A great deal of confusion has been caused by the failure to distinguish between three quite different uses of the words ‘corporate’ and ‘culture’.
ONE: among management theorists during the mid-1980s, ‘corporate culture’ became a fashionable term to describe aspects of a company’s socio-technical psychology; ‘corporate culture’ was a way of describing a whole company’s ‘state of mind’. Although an imprecise discussion at best, the content was important: the way in which managers related to each other, their ability to innovate, and the ability of a company’s structure to support innovation, were seen to be as important as technological or marketing prowess in the battle for competitive advantage.
TWO: a second use of the word culture referred to sponsorship of arts events. As we described earlier, the concept of business funding artistic enterprise dates back centuries, but in the USA, in particular, the notion of strategic philanthropy was strong during the 1970s and 1980s. The traditional disinterested patron of grand cultural events still existed, with the great foundations – Carnegie, Mellon, Ford, Rockefeller, Getty – continuing to dominate the arts funding scene; but they were joined by hundreds of other corporate patrons, many of them operating at a local level, who helped expand the ‘cultural economy’ dramatically: in the USA, corporate donations to the arts broke the $1 billion a year mark in 1988. In recent years, the more savvy players (IBM, Mobil) have repackaged their philanthropy and patronage as social policy, creating quite detailed criteria for the distribution of funds not just to artists, but also to community groups, educational bodies and health organisations. Strategic philanthropy produced a new breed of sponsorship consultant who matchmakes between the Big Culture producers (opera, theatre, art shows) and corporate or private sponsors. Various techniques are employed to make sure the sponsor gets his or her money’s worth – from the selection of appropriate events to the minutiae of the opening night party.
THREE: now, a third interpretation of ‘culture’ and corporate strategy is emerging, in which the internal ‘state of mind’ of the company is perceived to have entered a new, synergistic relationship with the ‘outside’ world of consumers, technological change and exploding communications. A new industrial culture of continuous innovation has been identified, in which cultural projects are transformed into a medium for communication between the company and the external environment.
In other words, because the concept of a learning organisation, described by Charles Hampden Turner, entails a constant relationship between staff, consumers, consultants, scientists, and so on, a new communications medium is needed to support that relationship. The medium is culture in both senses – internal behaviour, and external cultural event – but the combination is entirely new. Managing the creation of this new medium is a process we call cultural engineering.
In most business literature, the parameters of cultural engineering are drawn rather tightly around sponsorship of traditional categories of ‘culture’ – fine art, opera, theatre, music, and so on. Even within these traditional categories, spending on cultural projects has exploded. In Europe, for example, it is estimated that arts sponsorship in the UK, France, Germany, Belgium and the Netherlands has reached $400 million; this growth is attributed to the rise of arts sponsorship associations, and the introduction of fiscal incentives by national governments. But these figures tell only part of the story; the emphasis on financial contributions, donations in kind, and specialist advice, varies from country to country, with UK businesses giving mainly money, and West German companies concentrating on goods and advice. Given that professional sponsors usually spend up to three times as much on marketing support as on the art event itself, the total sponsorship economy in these five European countries alone is probably nearer $2 billion.
Although reliable world-wide figures are not available, the world-wide sponsorship economy – the sum of cash grants, help in kind, and back-up marketing budgets – is probably $5-8 billion. Add in capital grants to infrastructure projects, such as museums, art galleries and theatres – many of which receive free land, or low-rent premises in otherwise commercial developments, and the figure rises to nearer $20 billion.
New categories of cultural engineering
Restricting the category of ‘culture’ to traditional art events – even when it produces a $5-8 billion niche marketing activity – grossly underestimates the real size of the investment by business in culturally-related programmes. For, if one accepts the lessons of the corporate identity movement that all a company’s activities contribute to its image – from the state of the office interior, to the quality of its advertising – then the size of the cultural economy explodes.
Consider the advertising industry. According to the British marketing services conglomerate WPP, the world-wide fee income for the advertising sector is over $100bn – and according to many cultural critics of the ‘post-modern world’, advertising and mass communications have become so pervasive that they must be judged, in part at least, as cultural activity. In the UK, for example, more than 75% of art school graduates go on to work in advertising or the media. The most effective advertising not only exploits existing cultural references in its content (pop songs, artists, famous designers and fashion concepts are all regular subject matter for advertising); the best advertising also creates new cultural forms of its own: one thinks of computer graphics, in which artists have created a whole new experimental aesthetic in the course of their work in advertising.
Of course, the theory that business is becoming more ‘aesthetic’, in the broadest sense, does not mean that these billions of dollars are perceived by the corporate business people who spend them as cultural expenditure. On the contrary, the great majority of companies still make a big effort to segregate sponsorship from other marketing activities – and in the whole world, there are probably no more than 30-40 companies that consciously integrate all these aspects of their business into a unified strategy. But this is not the point. In our view, these 30-40 companies are the most advanced in the world, and in many respects are useful models for the future.
Interestingly, the concept of a new industrial culture, in which producers, consumers and experts are united by a continuous process of innovation, is understood by meta-industrial organisations rather better than by ordinary companies. One thinks, for example, of city governments which, in recent years, have found themselves forced into intense rivalry and competition with other cities around the world. Some of the ways in which these non-traditional cultural activities are managed are introduced in the following sections.
The Cultured Company
In this brief survey, we described a progression from advertising (1960s), through marketing (1970s), to Corporate Identity (1980s) and ‘Deep Marketing’ (1990s); in Deep Marketing, wherein companies employ a constantly changing mixture of communication techniques, Cultural Engineering plays a central role as the medium for communication between the company and its external environment. We also explained the important way in which this definition of Cultural Engineering combines the two earlier uses of the words ‘corporate’ and ‘culture’: 1] corporate culture as ‘corporate state-of-mind’ or ‘the way we do things around here’; 2] the hands-off, philanthropic sponsorship of arts events by corporations. By combining the two ideas in Cultural Engineering, we proposed that a company’s involvement in external cultural activities would, in itself, change the company’s internal culture.
Despite our argument that ‘hands-off’ sponsorship is in decline, this model remains highly influential, particularly in Japan and in Europe; (in the USA, there are some indications that the recession is causing some big sponsors to question the value of these activities). But in Japan, in particular, the concept of disinterested philanthropy is strongly reinforced by a tradition of civic duty. Long before arts sponsorship was discovered in Japan, the owners and leaders of companies felt a collective responsibility to repay to the community some of the profits gained in their business lives – a concept more-or-less completely absent from most Western industries, in the 20th century at least.
Our argument is not that civic duty or social responsibility is wrong – but that, when applied to sponsorship of the arts, it cannot be ‘hands-off’. The sheer scale and importance of corporate funding for the arts will influence culture, whether the donors wish it or not. In the 1990s, corporate leaders will have cultural responsibility forced upon them; they cannot escape it.
Some corporations solve this problem by refusing ever to donate large sums to a single cultural activity: in Britain, for example, companies like Shell, or the Midland Bank, donate large sums each year – but in small quantities to large numbers of recipients. This process requires considerable in-house management: selecting from among many thousands of applications each year takes a lot of expert work.
In the USA, corporations have solved the ‘responsibility problem’ by devolving sponsorship to local branches, which often choose to concentrate on social or educational activities, such as schools, parks, and public amenities. This policy can be highly effective in helping a multi-national company integrate itself into local communities (and markets). This policy of ‘distributed sponsorship’, with a strong social emphasis, will certainly appeal to companies for whom the Cultural Engineering concept is unattractive.
But for many sophisticated, knowledge-based enterprises, the use of culture to create an innovative, creative and learning organism – the company of the future – offers tremendous opportunities. The question for them is this: how to manage the transition?
One must distinguish, here, between the American concept of Strategic Philanthropy – or the slightly different French concept of mécénat – and cultural engineering in the sense we have described it.
In the former situations, external cultural projects are selected on the basis that they meet clearly defined objectives set by the sponsoring company. So, a company may wish to develop its image as an intelligent company, in which case the choice or selection of event is crucial. So is the mechanism by which the identity of the sponsor is conveyed. In sports sponsorship, and increasingly in arts sponsorship, sponsors typically devote 300% more than the cash subsidy to promotion of their own role as sponsors. Increasingly, the French mécénat model emphasises the development of in-house expertise in cultural management, so that companies need not just react passively to proposals from outside producers, but may proactively develop projects, jointly with museums or theatres. Shiseido’s new Culture Division offers an advanced model of this kind, with a Senior Manager reporting directly to the President. But in Europe, the opposite phenomenon may also be observed – the management of arts sponsorship drifting down the management hierarchy, away from the President and towards the Brand Managers who are better placed to target arts events at specific consumer groups. This phenomenon is paralleled by a decrease in the influence of the CEO on the choice of events: the President’s personal preference for opera is giving way to the Brand Manager’s more intimate knowledge of what turns on his customers.
But there are two new challenges for top management: first, how to integrate external cultural events with the internal development of its own people – using external sponsorship to change the internal ‘state-of-mind’; and second, how to extend the management of culture from traditional arts events to the various components of Deep Marketing: architecture, design, advertising, corporate communications, training, and so on.
There are no simple answers to these two questions. Managing cultural policy is like managing change (another business buzzword of the 1990s): a complex process operating at different levels and changing through time. That said, certain components of a Cultural Engineering strategy may be listed:
A] Involve all levels of management: For an organisation to change its state-of-mind, it is not enough simply for the CEO or President to issue edicts: junior and middle managers need to be involved in commissioning, organising, and exploiting cultural projects – in collaboration with artists and cultural producers. This is not to say that managers should interfere with the artist’s independence – but he should be intelligently involved as a partner in the creative process.
B] Develop in-house expertise: It follows that companies should not rely on sponsorship consultants to tell them what to do; consultants have a crucial role to play as ‘brokers’ or contact-points – but they should be used as support services for work rooted inside the organisation. Cultural training will be needed to help managers; at the moment, such training is almost non-existent (*) – companies will need to work with museums, universities and art schools to create the training systems needed.
C] Integrate traditional culture with other mediums: many of the managers who will be required to get involved in cultural work (art exhibitions, concerts, theatre and so on) will also be involved in design management, advertising campaigns, research policy, corporate communications, and so on. The whole essence of a ‘learning company’ is that these different subjects are connected. Practically, this means a company’s management structure must promote interaction:
* when a building is commissioned, the most culturally advanced managers should be involved in briefing the architect;
* when an advertising campaign is developed, the company’s involvement in an art exhibition should be included as part of the research phase;
* in new product development, designers would benefit from exposure to craftsmen, sculptors, or other artists;
* when a company is developing training policies, it should include contact with cultural institutions, as well as universities and management schools, in the mix;
* art and technology can interact with each other in surprising and profitable ways – but such interactions need to be organised, and not left to chance, or to the initiative of artists. Research managers, in particular, will need to begin using artists as a new breed of researcher.
D] Treat cultural programmes as an investment, not an optional extra: the concept of ‘1% for art’, in which companies promise to spend 1% of their gross profits on arts sponsorship, is good for artists and cultural producers – but it is not, in our opinion, a valid policy for business. The reason: if cultural engineering is to be taken seriously, some companies may need to spend more than 1% – and others might validly spend less. The problem with ‘1% for art’ policies is that such budgets are separated from central research, training or marketing programmes: ‘1% budgets’, by their nature, tend not to be monitored, managed or evaluated.
Cultural budgets should be included with research, marketing, training, or corporate identity, and not segregated as a special case. Cultural programmes create new knowledge, and new values – therefore they should be treated as an important investment.
E] Cultural engineering is only one of the answers: To repeat: in future, companies will develop Deep Marketing expertise, in which many skills and tactics will be used simultaneously. Cultural Engineering is a key element, but it is not the only one.
F] Every company’s culture policy will be different. In today’s most advanced companies, there is already considerable integration between Cultural Engineering, Design Management, Corporate Identity and so on. For companies such as Seibu Department Stores (retailing), Olivetti (information products) or Armani (fashion and merchandising), these different functions are already deeply ‘cultural’. These companies possess a culturally advanced ‘state of mind’; they are the prototypes of ‘the cultured company’. But each one is also different: none of these companies has adopted an abstract model of cultural policy; each has exploited its history, and its existing skills, as well as new external opportunities, such as changing consumer tastes, new technologies, and so on.


Design and Innovation Research Centre (DIEC) (Planning a new design and innovation centre. Newcastle upon Tyne. 2001)

[image: diec.png]
Doors of Perception was in a consortium that developed the initial specification and blueprint of an important new institution to be based in Newcastle-upon-Tyne in England. Our client was the UK regional development agency, One North East. Its title, at the time of our contribution, was Design and Innovation Research Centre. The project is now known as The Northern Design Centre – and here is a pic of its proposed new building:
[image: diec.NorthernDesignCentre-NCL.png]


Rules of engagement between design and new technology

These principles were formulated for my keynote at the Computer Human Interaction (CHI) conference, The Hague, 2000:

1] We cherish the fact that people are innately curious, playful, and creative. This is one reason technology is not going to go away: it’s too much fun.

2] We will deliver value to people – not deliver people to systems. We will give priority to human agency, and will not treat humans as a ‘factor’ in some bigger picture.

3] We will not presume to design your experiences for you – but we will do so with you, if asked.

4] We do not believe in ‘idiot-proof’ technology – because we are not idiots, and neither are you. We will use language with care, and will look for less patronising words than ‘user’ and ‘consumer’.

5] We will focus on services, not on things. We will not flood the world with pointless devices.

6] We believe that ‘content’ is something you do – not something you are given.

7] We will consider material and energy flows in all the systems we design. We will think about the consequences of technology before we act, not after.

8] We will not pretend things are simple, when they are complex. We value the fact that by acting inside a system, you will probably improve it.

9] We believe that place matters, and we will look after it.

10] We believe that speed and time matter, too – but that sometimes you need more, and sometimes you need less. We will not fill up all time with content.


New geographies of learning

How technology is altering the terrain of teaching. I rashly agreed to give a lecture to several hundred university teachers in Amsterdam….(This is the text of a speech given on September 6th, 2000, at the Hogeschool van Amsterdam).

I am most grateful for – and not a little intimidated by – your invitation to give this talk today. I say intimidated because I am an outsider speaking to a room full of experts. At a rough guess, I’d say that you probably have about 5,000 years of educational experience between you in this room! If we add in your time as childhood learners, then your aggregate experience doubles to probably 10,000 years!

Now this may surprise you, but I am not here to promote the internet as the answer to every educational question we face. On the contrary. I am sceptical about the claims being made for web-based learning. Most of it strikes me as ‘old wine in new bottles’. The potential of the internet is not understood – let alone exploited – by much of the ‘virtual’, ‘distant’ or ‘online’ education that’s out there now.

But I am not a technophobe. I do not criticise today’s e-learning products because they use the internet, but because they don’t use it enough. The internet contains amazing examples of what I call ‘net effects’ that can enhance learning in spectacular ways. But these net effects are being developed in different contexts, and for different activities, than education and learning.

The main point of my story today is this: we should use these ‘net effect’ tools for learning, whether or not they were intended for that. My talk today has three parts. First, I will explain why I don’t much like or trust most of the internet-based learning that’s on offer now. Secondly, I’ll show you some of the ‘net effects’ that we should hijack for our own purposes. In the final part of my talk, I’ll tell you about a forthcoming event – OroOro: teacherslab – which has been designed to help us take this kind of initiative.

My critique of today’s e-learning is this: it focusses on just one aspect of the learning process – the delivery of text or media from one place to another. This scenario is often accompanied by fantasy images of privileged individuals surrounded by all the world’s knowledge. ‘Streaming learning’ for the hi-tech elite.

There are two problems with this picture. First, it is not yet technically feasible: the tools and infrastructure for multi-channel broadband communications on a large scale are simply not there. The second, much bigger, problem is this: any service that restricts itself to the delivery of pre-packaged content ignores the social and collaborative nature of learning, and the cultural qualities of time and place that add depth and texture to the process. I call these key ingredients the geographies of learning.

I am sure nobody here would seriously aspire to replace schools and universities with websites or cable channels. But there are powerful interests out there who do. A couple of years ago, a former Dutch economic affairs minister told me that, with the internet, “we can stream lectures from the best ten per cent of teachers to classrooms, and do without the other 90 per cent”. I also visited Japan with a European delegation; there, we were proudly shown vast halls filled with hundreds of personal computers – facing forward to the teacher, in neat rows. This, we were told, was a school of the future. To use the language of my childhood comic Dandy: “Yikes!”

Fantasies of a technological fix for education are highly attractive to some politicians. Faced with large-scale skill shortages, they are receptive to scenarios that ‘penetrate the schools’ with new technology and thus, as if by magic, multiply the production of well-trained students. This rosy vision is clouded only by the possibility that grown-ups might stand in the way; I have read in several policy documents that ‘teachers are the main impediment’ to technological modernisation. Some developers are just as bad, boasting of their ‘teacher-proof technology’.

As Gore Vidal once said, even paranoids have enemies. So if any of you have been suspicious about the motives of people promoting new technology in education – you were right! At least in part. Such visions of a vast, semi-automated learning machine remind me of the joke about the factory of the future: it will have only two employees, a man and a dog. The man will be there to feed the dog – the dog will be there to stop the man touching the equipment. Technology push is not a new feature of the learning world.

Throughout the 100 year history of distance education, which began with the correspondence course, ambitious claims have been made for the capacity of technology to improve the way we learn. First there was radio, then television, then video – a whole series of ‘Next Big Things’ even before the internet came along. None of those earlier technologies lived up to the claims made for them. Neither will the internet – unless we change the things we are asking it to do.

We need to be vigilant, creative and proactive – right now – because technology push is intensifying. The internet is only one aspect of this. My picture of a cityscape (borrowed from Autodesk) neatly suggests that almost everything man-made, and quite a lot made by nature, will soon combine hardware and software.

So-called pervasive computing spreads intelligence and connectivity to everything around us: ships, aircraft, cars, bridges, tunnels, machines, refrigerators, door handles, walls, lighting fixtures, shoes, hats, packaging. You name it and someone, rather soon, will put a chip in it. The world is already filled with between eight and thirty computer chips for every man, woman and child on the planet. (The number depends on who you ask). Within a few years – say, the amount of time a child who is four years old today will spend in junior school – that number will rise to thousands of chips per person. A majority of these chips will communicate with networks. Many will sense their environment in rudimentary but effective ways.
(PICT)003 “here’s looking at you, too”

The way things are going, as the science fiction writer Bruce Sterling so memorably put it, “you will look at the garden, and the garden will look at you”. Mind you: what the sunflower will see may not be very interesting. By 2005, nearly 100 million Europeans will be using wireless data services connected to the internet. But so far, only one service seems to have caught our imagination: paying the parking meter via mobile phone. I do not imagine our sunflower will be impressed by that!

Technology push confronts us with an innovation dilemma. It is simply stated: our industries know how to make amazing things, technically. That’s the top line in my chart: it heads manfully upwards. The line could just as easily apply to the sale of mobile devices, internet traffic, processor speeds, websites, or e-commerce. That blue line is a combination of Moore’s Law (which states that processor speeds double and costs halve every 18 months or so) and Metcalfe’s Law (which states that the value of a network rises in proportion to the square of the number of people attached to it). But a new law – I have modestly named it Thackara’s law – is that if you put smart technology into a pointless product, the result will be a stupid product.
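The arithmetic behind those two laws is worth pausing on, because it explains the steepness of the blue line. A minimal sketch (illustrative figures only, using the doubling period and network formula as paraphrased above):

```python
# Illustrative arithmetic for the two "laws" mentioned above.
# Moore's Law (as paraphrased here): capability doubles every 18 months.
# Metcalfe's Law: a network's value grows with the number of possible
# connections among its users, i.e. roughly the square of their number.

def moore_factor(years, doubling_months=18):
    """Growth factor after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

def metcalfe_value(users):
    """Number of possible pairwise connections among `users` people."""
    return users * (users - 1) // 2

# Six years is four doublings: a 16x gain in capability.
print(moore_factor(6))        # 16.0

# Doubling a network's users roughly quadruples its connections.
print(metcalfe_value(100))    # 4950
print(metcalfe_value(200))    # 19900
```

Note what Thackara’s law adds: neither formula says anything about whether the product at the end of the pipe is worth having.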

We’ve created an industrial system that is brilliant on means, but pretty hopeless when it comes to ends. The result is a divergence – which you see on my chart – between technological intensification – the high-tech blue line heading upwards – and perceived value, the green line heading downwards. The spheroid blob in the middle is us: we are hovering uneasily between our undiminished infatuation with technology, on the one hand, and our unease about its actual value, and possible rebound effects, on the other.

Much of today’s e-learning reflects this tension between what we can do, and what we ought to do. Much of it is an answer to the wrong question. The wrong question is: “hey, we have a new communication medium called the internet! what shall we do with it?”. The right questions are these: “what is it about the learning process that needs to be improved? in what ways might the internet enable those improvements?”.
Our dilemma is that, although the internet and new media technologies can do some amazing things, they cannot support the soft and ‘wet’ aspects of learning that I believe we cannot do without. Besides, even if the technology could cope, no business model has emerged to pay for these more complex forms of learning.

Right now, these questions are not heard amid fevered talk of an ’emerging electronic university’, a ‘unified global marketplace for ideas’ and ‘worldwide web-based knowledge exchanges’. This kind of rhetoric has started a feeding frenzy among investors. One example: the world’s first trade fair for education – World Education Market – took place in Vancouver in 2000. Thousands of new players were attracted to an event which promised that education would soon be a $90 billion business – one of the biggest in the world, along with financial services and health.

E-learning entrepreneurs calculate that, in a knowledge-driven society, investors will place a higher value on people than on plants and equipment. Proponents of the ‘intellectual capital’ concept assert that 70 per cent of a nation’s wealth today is in the form of human capital, rather than physical capital. Whether or not this theory is true hardly matters: the markets perceive it to be true – and have decided it is worth a bet. To be fair, the high priest of the intellectual capital movement, Tom Stewart, says repeatedly in his book that “smart individuals do not add up to a smart enterprise: for that, you need knowledge to flow. Sharing and transporting knowledge are what counts”.

In the breathless words of one new education ‘portal’, UNext, “the vast imbalance between the supply and demand for quality education provides an enormous, untapped global market. Countries, companies, and individuals that don’t invest in knowledge are destined to fall behind”. The internet, gushes Unext, “has created an unprecedented opportunity to create a global education business”.

Some new projects are so-called ‘pure play’ initiatives – start-up companies which aspire to be a ‘learning portal’ through which all types of knowledge and learning will be exchanged. One such, Hungry Minds (hungryminds.com) is – I quote – “continually combing the net to feed our growing database of 37,000 online courses”. The fact that Hungry Minds is only a couple of years old may explain its preference for quantity over quality.
Another pure-play site, Corpedia.com, has enlisted the world’s leading management guru, Peter Drucker, to make a series of five hour-long management programmes – “leading business strategists delivered direct to your computer”. Corpedia’s demo includes a wonderful sequence in which a fictitious (I hope) employee types into an ‘electronic schoolbook’, “I resolve to become a better employee”. The power of computing and ‘learning process re-engineering’ is wondrous to behold.

Another new entrant, Fathom (fathom.com), has spent many millions of dollars building an alpha version of its portal, which has not even been launched as I speak. Fathom has partnered with an impressive roster of blue-chip universities and institutions, including the London School of Economics, Columbia and Chicago Universities, Rand Corporation, and New York Public Library. The Nobel Prize-winning professors and heads of state who studied at these august institutions take pride of place on the home pages of sites like UNext and Fathom.

Other institutions are going it alone. Harvard Business School has invested millions of dollars a year in its website since the mid-1990s; the site features sophisticated interactive software that adds zest to the tonnage of business case studies. Penn State University has thrown all modesty to the winds with its so-called “World Campus”. And at the Wharton Business School, its private-sector neighbour, you can spend $50,000 on a four month e-business course. There are more than 1,600 accredited distance learning courses, many of them web-enabled, in California alone. So there’s a lot happening out there, and big money is flying around.

But I suspect some of these projects miss the point. In many of these ventures learning is understood – if it is understood at all – as a one-way, ‘point-to-mass’ distribution system. My line is this: even if there are ten Nobel Prize-winning professors sitting at that ‘point’, delivering content down a pipe, like water, this is not teaching. And ‘receiving’ content – like an empty bucket under a tap – is not learning. Put another way: I’d be very surprised indeed if these Nobel Prize-winning eminences would have made such a big contribution if they’d done all their teaching and research on the net.
The English writer Charles Hampden-Turner has put it better than I can: “knowledge is becoming too complex to be carried in the individual heads of itinerant experts. Knowledge as it grows and grows is necessarily social, the shared property of extended groups and networks”.
(PICT)009 geographies of learning

The ‘distribute-then-learn’ model cannot embrace the more complex geographies of learning that I mentioned earlier. I like the way David Hargreaves put it in a Demos pamphlet: “schools”, he wrote, “are still modelled on a curious mixture of the factory, the asylum, and the prison”. Unless we think about learning as a process that depends on place, time, and context – the internet will not enhance learning. It will probably make it worse. I will briefly take you through these ‘geographical’ qualities to explain what I mean.

Learning is social, learning is asynchronous, learning is local, learning is organisational, learning is sharing, learning is searching, learning is play.

An important new book, The Social Life of Information, by Paul Duguid and John Seely Brown, reminds us that we learn not only by the acquisition of facts and rules, but also through participation in collaborative human activities. The most valuable learning takes place among social networks, not at the end of a pipe filled with pre-packaged ‘content’. (The fact that one author of this book, Seely Brown, is Chief Scientist of Xerox, suggests that big companies may be changing the way they innovate away from a technology-led approach).

Learning is asynchronous. New technology has worked best when helping people interact across time, rather than across space. When students and teachers can access web documents at different times, they can escape the temporal confines of the classroom, say experts like John Seely Brown. The best of such internet tools are usually an extension of – not a replacement for – face to face exchanges.

The concept of a ‘death of distance’ made great headlines a couple of years ago. Its grandchild is the concept of ‘anytime, anywhere learning’. The idea sounds attractive and uncontroversial. But when based on a point-to-mass distribution model, it overlooks the significance of place and local knowledge. A lot of what we learn is remarkably local: History. Agriculture. Politics. Art. Geology. Viticulture. Forestry. Conservation. And local does not just mean local nature. The city of Paris (shown here on a photograph) is also replete with ‘local’ knowledge. Cities are unique learning ecologies. The danger we face is a combination of ‘death of distance’ ideology and the sheer pressure of money and technology behind ‘global’ e-learning scenarios that could marginalise local forms of knowledge, regardless of their importance.

A lot of learning takes place in offices, research labs, hospitals, design offices, web studios – anywhere, indeed, that people gather together to work. The way we organise education – or for that matter work – hinders integration between the two communities. The Internet makes it easier to connect parts together in a technical sense – but breaking down the walls between ‘school’ and ‘work’ and ‘home’ will involve cultural and institutional connections that will be harder to achieve.

The prominence given to the presence of Nobel Laureates in the rhetoric of portal sites like UNext and Fathom suggests that they are wedded to a Great Minds theory of learning. But teacher-to-student education is only one side of the story. Student-to-student learning (or peer-to-peer learning outside formal educational contexts) is just as important. And let us not forget student-to-elder teaching! At a time of rapid technical change, so-called ‘upward mentoring’ is coming into play because ‘students’ often have a fresher understanding of specialised technical domains. The founder of MediaLab, Nicholas Negroponte, tells a great story about upward mentoring. “I never used to understand why people had difficulty with their video recorder remote control ” he says. “Until, that is, my own remote control – my son – went off to college. From that moment on, I’ve been unable to use my video at all.”
(PICT)015 learning is networking

The concept of local knowledge ecologies summons up the image of education as a kind of mythical journey. A student would no longer expect her or his university career to take place in a particular place, for a pre-set period, among a pre-selected body of academics. Instead, tomorrow’s student will travel, Chaucer-like, among a network principally of his or her own making – staying at home, travelling, mixing online and off-line education, work in classes, or alone, or with mentors – and above all continuing the journey long after taking a degree.

It takes a lifetime to become the child that you should be, said Jean-Luc Godard. But vast projects to wire up classrooms to the Internet seem to be going in the opposite direction. Rather than make space for children and teachers to learn in new and playful ways, most ‘wired classrooms’ are more like cages filled with experimental rats. Only the rats are our children. But in The Netherlands, origin of Huizinga’s Homo Ludens, we should know better. We learn by playing and by doing – not by being filled up with knowledge like a bucket. Or a hungry rat. We need playmates, too.

In the first part of my talk, I complained that a lot of Internet-based education is based on an industrial, ‘distribute-then-learn’ model at the expense of other qualities which are just as important – social, local, organisational, sharing, networking, and play. But I do not blame the Internet for e-learning’s lack of ambition! On the contrary: away from “Learning” with a capital “L”, astonishing new tools and environments are being developed. They are called “customer service technologies” or “application services.” These are the focus of part two of my talk.
(PICT)018 customer service applications

Many of the buzzwords used to label these new tools will mean nothing to you. Many of them mean nothing to me, and I’m supposed to be an expert:
[Words on screen] Search Engines, Wizards, Filters, Bots, and Agents. File Sharing, File Transfer, Intelligent Routing. Auctions and Clearing Houses. Portals, Vertical Nets, and Vortals. Games; Opinion Sites; Feedback, Rating, Comparison and Recommender Systems. Groupware, Community Ware, List Servers, Moderation Support Tools. Live Voice, Real Person, Chat Spaces. Keywords: Peer-To-Peer and Open Source.

But before we explore what these words mean, I’d like to draw your attention to the amazing speed and scale of the innovation taking place. These technical innovations receive far less attention than e-commerce – partly because it is often hard to grasp what these applications are for. But, albeit obscurely, dozens of new applications emerge every month. For a flavour of this strange new world, read new economy magazines like Red Herring, Business 2.0, Fast Company, or Industry Standard. These new economy bibles are filled with advertisements for these obscure new applications. As a sample, only, of what I mean, allow me now to show you just four examples – from the hundreds out there – of applications that I think we can use in learning.

The first is file sharing. The subject has been in the news as the conflict between Napster and the global music industry (plus a ton of lawyers). What happens is this. People who want to obtain a music file (which may have been copied from a CD and compressed into something called an ‘MP3’ file) can do it in one of two ways. They can download it directly from a server linked to the worldwide web. Or they can use a file-sharing service like Napster to grab the track directly from another user’s computer. Listeners running Napster software use it to request a song; the programme searches the hard drives of all other Napster users who are online, and generates a list of the hard drives where the song can be found. Listeners can then download the file directly from the selected location. Sometimes this transaction can involve email exchange between the two users.
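The lookup step described above – a central list of who holds what, with the download itself happening peer-to-peer – can be sketched in a few lines. (A toy illustration with hypothetical peer names, not Napster’s actual protocol.)

```python
# A toy sketch of the Napster-style lookup described above: a central
# index maps song titles to the peers whose hard drives hold them.
# The actual file transfer then goes directly between two peers;
# only the search passes through the central server.

from collections import defaultdict

class CentralIndex:
    def __init__(self):
        self.songs = defaultdict(set)   # title -> set of peer addresses

    def announce(self, peer, titles):
        """A peer coming online registers the files it is sharing."""
        for title in titles:
            self.songs[title].add(peer)

    def search(self, title):
        """Return the online peers holding the requested file."""
        return sorted(self.songs[title])

index = CentralIndex()
index.announce("peer-a", ["Yesterday", "Kind of Blue"])
index.announce("peer-b", ["Yesterday"])

print(index.search("Yesterday"))   # ['peer-a', 'peer-b']
```

The later systems mentioned below, Gnutella and Freenet, remove even this central index: the search itself is passed from peer to peer, which is why there is no single server left to shut down.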

Online music sharing services like Napster provide access to millions of song files. Access, that is, to anyone with a computer, a sound card, and an Internet connection. 20 million users and rising fast. Whether or not Napster survives the legal onslaught of the music industry, other file-sharing platforms like Gnutella or Freenet are emerging too. Due to the de-centralised way they work, litigation is difficult, if not impossible. They do not use central servers that can be shut down: there is no ‘there’ there. Besides, these free programmes are developed by a loose coalition of young software developers who are guided by a strong sharing concept known as the Open Source movement. Open Source adds cultural energy and legitimacy to what is already a super-smart technological onslaught on centralised knowledge distribution.

The next ‘net effect’ I find intriguing is so-called live person technology. You might consider it an irony that contact between real people should be trumpeted as an innovation. After all, we enjoyed unlimited personal contact before the communications revolution that began with the telephone in 1876! Tant pis: retro-fitting real people into websites is now a big trend. Channels such as CNN and BBC Online are steadily expanding services that allow viewers to interact. Live contact is a bigger priority in the business world. A lot of effort is going into customer service technologies that help companies interact with their customers in real-time with varying degrees of directness.

Such systems as LivePerson allow companies to build a shared knowledge base for ‘pre-formatted responses’. (PFRs they are called in the trade). The aim is to provide at least the perception of so-called ’24/7 Customer Assistance’ – ultimately increasing the site’s “stickiness” and value. A live person scenario for learning is not hard to imagine. Teachers all over the world complain when students ask them the ‘same old questions’ over and over again. By putting answers to students’ old chestnuts into a database, teachers could free up their time for direct input about new and original points of discussion.

Teachers may have more mixed views about my next net-effect application: opinion sites and ‘recommender systems’. A well-known example is epinions.com. One million reviews have been posted on epinions since its launch a year ago – about 4,000 a day. 10,000 of the reviews posted have been reviews of the site itself – a rich source of feedback for the company’s designers and managers. Such environments can dramatically increase a buyer’s spectrum of available, high-quality and efficient suppliers.

Another buzzword – ‘Supplier Performance Ratings’ – refers to other tools that help one buy services in new ways. Open Ratings’ (openratings.com) services include the display of real-time ratings during the decision-making process. After a transaction, Open Ratings collects ‘satisfaction surveys’ from the buyer and supplier, then crunches the data in a way that weeds out fraudulent and retaliatory feedback. Participants can track their ‘reputation performance’ online. When it comes to education, caveat emptor indeed!
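Open Ratings’ actual data-crunching is proprietary, but one simple, hypothetical way a ratings service might damp retaliatory feedback is to trim the extremes before averaging, so that a single tit-for-tat score cannot drag down a reputation:

```python
# A hypothetical sketch (not Open Ratings' real method) of damping
# retaliatory feedback: drop the extreme scores before averaging,
# so one spiteful zero cannot sink an otherwise solid reputation.

def trimmed_mean(scores, trim=1):
    """Average after dropping the `trim` lowest and highest scores."""
    if len(scores) <= 2 * trim:
        return sum(scores) / len(scores)
    kept = sorted(scores)[trim:len(scores) - trim]
    return sum(kept) / len(kept)

# Nine satisfied buyers and one retaliatory zero out of ten:
scores = [9, 8, 9, 10, 9, 8, 9, 9, 10, 0]
print(trimmed_mean(scores))   # 8.875 -- the zero is discarded
```

Real systems would weight by rater history rather than blunt trimming, but the principle is the same: no single voice gets to define a reputation.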

Finally, the net effect called games. I said earlier that play is sorely absent from most learning sites. The good news is that, away from the earnest attentions of learning entrepreneurs, children (and adults) are playing online like crazy. They spend hours on computer games which demand extraordinary feats of skill, intelligence, prediction, and motor co-ordination. All of these are aspects of high-quality learning, too. Sales of games software in America hit $3.3 billion last year, accounting for 15 per cent of all software sales. The Japanese spent $9 billion on games in 1998!
Many parents – and possibly spouses, too – worry about the shoot-and-slash storylines of games, nervous that their loved ones’ minds are being turned to mush. Seasoned experts are more optimistic, and believe that children are learning to learn in new ways. According to Douglas Rushkoff, author of Children of Chaos, the youth of today have mutated into “screenagers”. The television remote-control, the videogame joystick, and the computer mouse, have irrevocably changed young people’s relationship to media. In any case, the worlds of game-obsessed children, and of sophisticated business, have started to overlap.

Gaming theory in general, and visual simulation in particular, are hot topics now in business. Banks, oil companies, city planners and environmental agencies, are all using game techniques to enrich their understanding of future scenarios. So, if someone in your family appears to be zapping monsters, do not despair: the skills they are acquiring can also help them explore scenarios about the future of ecosystems.

ORO ORO TEACHERSLAB

Let me recap on my story so far. In part one, I argued that delivering content down a pipe is not teaching. New models of learning are needed that connect people to people – not people to machines. In part two, I showed you examples of ‘net effects’ that involve sharing, live contact, opinion giving and rating, and play. I suggested that this kind of application – and many more on the way – should be plumbed into the learning process. Now, whether we want to change or not, technology will come. Entrepreneurs will continue to innovate. Student values will carry on evolving, and their media behaviours will continue to perplex us. But we have a choice: to be passengers, as they drive a transformation of the ways we teach and the ways we learn. Or we can join them in the driver’s seat – and wrest the wheel from interlopers who don’t know how to drive.

Will we be the innovators in leveraging the value of what – and who – we know? Oro Oro has been organised for those of us who – yes – want to take the initiative. This unique three-day experience – part symposium, part hands-on workshop – is about new ways to teach, and to learn, in practice. The philosophy behind it is that you do not have to be under 25, and you do not have to be a nerd, to succeed in these hybrid learning situations. The objective is this: by the end of Oro Oro, every participant will be online, and on the net. We will be acquainted with new concepts, skills and tools for the future. And we will have sampled learning interactions on the web that have their own rules, rhythms and speed.

The focus of Oro Oro is not just about technology, or online channels and tools. Its focus is on people, and on new ways to organise relationships between what – and who – we know. We are being asked to think about teaching and learning as a market. But what kind of market is it? An ‘agora’ in which everyone sells knowledge – and time – to everyone else? What are the different ways to be paid for what you know? The answer is that nobody knows. But if we do not like the answers being given now, it’s up to us to propose alternatives. Thank you for your attention. And I’ll see you again at Oro Oro in January.

Posted in learning & design, most read | Leave a comment

Quality Time at High Speed (Service innovation workshop, Breda, The Netherlands, 2000)

What would it mean to design for fast and slow speeds?
The High Speed Network Platform, an association of 15 European regions, and Urban Unlimited, a planning firm, asked Doors of Perception to organise a cultural expert workshop on the theme, quality time: design for multiple speeds.
Today’s high speed train (HST) travel is a marvel of speed and profligate resource consumption. It is transforming the experience of space and time of 13 million travellers who already use it each year – and of citizens who live in places where the trains deign to stop. Enormous infrastructure projects are under way, but we have not made space for reflection on the cultural consequences of it all.
To fill this gap, the expert workshop developed project ideas for services and situations that connect people, cultural resources, and places, in new combinations.

Posted in [no topic] | Leave a comment

The design challenge of pervasive computing

This is the complete version of my keynote lecture that opened the Computer Human Interaction (CHI) Congress in The Hague in 2000.

What happens to society when there are hundreds of microchips for every man, woman and child on the planet? What cultural consequences follow when every object around us is ‘smart’, and connected? And what happens psychologically when you step into the garden to look at the flowers – and the flowers look at you?

My talk this morning has four parts.

First, I will talk about where we are headed – right now. I’ll focus on the interaction between pervasive computing, on the one hand, and our social and cultural responses to technology, and increased complexity, on the other.
The second part of my talk is about what I call our innovation dilemma. We know how to do amazing things, and we’re filling the world up with amazing systems and devices. But we cannot answer the question: what is this stuff for? what value does it add to our lives?
The third part of my talk is about the new concept of experience design – and why it is moving centre stage as a success factor in the new economy. Experience design presents designers and usability specialists with a unique opportunity; but I will outline a number of obstacles we need to overcome if we are to exploit it.
I conclude with a proposed agenda for change, which I package as, ‘Articles of Association Between Design, Technology And People’!

Where we are headed

So my first question is this: where are we headed? I want to start with this frog and the story, which many of you will have heard before, about its relationship with boiling water. You remember how it goes: if you drop a frog into the pan when the water is boiling, it will leap out pretty sharpish. But if you put the frog into a pan of cold water, and then heat it steadily towards boiling point, the frog – unaware that any dramatic change is taking place – will just sit there, and slowly cook.

The frog story is one way to think about our relationship to technology. If you could drop a 25-year-old from the year 1800 straight into the bubbling cauldron of a western city today, I’m pretty sure he or she would leap straight back out, in terror and shock. But we, who live here, don’t do that. We have a vague sensation that things seem to be getting warmer and less comfortable – but for most of us, the condition of ‘getting warmer and less comfortable’ has been a constant throughout our lives. We’re used to it. It’s ‘natural’.

But is it? Preparing this lecture required me to step back for a moment to get a clearer view of the big picture. This really was quite a shock. It’s not so much that technology is changing quickly – change is one of the constants we have become used to. And it’s not that technology is penetrating every aspect of our lives: that, too, has been happening to all of us since we were born. No: what shocked me was the rate of acceleration of change – right now. It’s as if the accelerometer has disappeared off the right-hand side of the dial. From the point of view of a frog sitting on the edge of the saucepan – my point of view for today – the water has started to steam and bubble alarmingly. What does this mean? Should I be worried?

One aspect of the heating-up process is that many hard things are beginning to soften. Products and buildings, for example, which someone so insightfully described as ‘frozen software’. Pervasive computing begins to melt them.

Let me explain.

I borrowed this picture from an ad by Autodesk because it so neatly hints that almost everything man-made, and quite a lot made by nature, will soon combine hardware and software. Ubiquitous computing spreads intelligence and connectivity to more or less everything. Ships, aircraft, cars, bridges, tunnels, machines, refrigerators, door handles, lighting fixtures, shoes, hats, packaging. You name it, and someone, sooner or later, will put a chip in it.

Whether all these chips will make for better products is one of the questions I want to discuss with you this morning. Look, for example, at the list of features on a high-end Pioneer car radio. Just one small product. There would be hundreds like it on the city street we just saw. Shall I tell you a strange thing? There’s no mention, on this endless list of features and functions, of an on-off switch! This car radio is about as complex as a jumbo jet. Jumbo jets don’t have an on-off switch either, as I discovered the first time I asked a 747 pilot to show me the ignition key.

Speaking of jumbos, I saw a great cartoon in the New Yorker depicting a 747 pilot, sitting back in relaxed interaction mode with a PDA in his hand: the caption says, “That’s cool: I can fly this baby through my Palm V.”

Our houses are going the same way, crammed full of chips and sensors and actuators and God knows what. And, to judge by this picture, increasingly bloated and hideous. Why is it that all these “house of the future” designs are so ghastly?

Increasingly, many of the chips around us will sense their environment in rudimentary but effective ways. The way things are going, as Bruce Sterling so memorably put it, “You will look at the garden, and the garden will look at you.”

The world is already filled with eight, 12, or 30 computer chips for every man, woman and child on the planet. The number depends on who you ask. Within a few years – say, the amount of time a child who is four years old today will spend in junior school – that number will rise to thousands of chips per person. A majority of these chips will have the capacity to communicate with each other. By 2005, according to a report I saw a couple of days ago, nearly 100 million west Europeans will be using wireless data services connected to the Internet. And that’s just counting people. The number of devices using the Internet will be ten or a hundred times more.

This explosion in pervasive connectivity is one reason, I suppose, why companies are willing to pay billions of dollars for radio spectrum. In the UK alone, a recent auction of just five bits of spectrum prompted bids totalling $25 billion. That’s an awful lot of money to pay for fresh air. It prompts one to ask: how will these companies recoup such investments? What’s to stop them filling every aspect of our lives with connectivity in order to recoup their investment?

The answer is: not a lot. We hear a lot in Europe about wired domestic appliances, and I can’t say the prospect fills me with joy. Ericsson and Electrolux are developing a refrigerator that will sense when it is low on milk and order more direct from the supplier. Direct from the cow for all I know! I can just see it. You’ll be driving home from work and the phone will ring. “Your refrigerator is on the line”, the car will say; “it wants you to pick up some milk on your way home”. To which my response will be: “tell the refrigerator I’m in a meeting.”

But pervasive computing is not just about talking refrigerators, or beady-eyed flowers. Pervasive means everywhere, and that includes our bodies.
I’m surprised that the new machines which scan, probe, penetrate and enhance our bodies remain so low on the radar of public awareness. Bio-mechatronics, and medical telematics, are spreading at tremendous speed. So much so, that the space where ‘human’ ends, and machine begins, is becoming blurred.

There’s no Dr Frankenstein out there, just thousands of creative and well-meaning people, just like you and me, who go to work every day to fix, or improve, a tiny bit of body. Oticon, in Denmark, is developing hundred-channel amplifiers for the inner ear. Scientists are cloning body parts, in competition with engineers and designers developing replacements – artificial livers and hearts and kidneys and blood and knees and fingers and toes. Smart prostheses of every kind. Progress on artificial skin is excellent. Tongues are a tough challenge, but they’ll crack that one, too, in due course.
Let’s do a mass experiment. I want you to touch yourself somewhere on your body. Yes, anywhere! Don’t touch the same bit as the person next to you. Whatever you’re touching now, teams somewhere in the world are figuring out how to improve it, or replace it, or both. Thousands of teams, thousands of designs and techniques and innovations.

And this is just to speak of stand-alone body parts. If any of these body parts I’ve mentioned has a chip in it – and most of them will – that chip will most likely be connectable. Medical telematics is one of the fastest growing, and probably the most valuable, sectors in telecommunications – the world’s largest industry. There’s been a discussion of patient records, and privacy issues; and the media are constantly covering such technical marvels as remote surgery.

We hear far less, though, about connectivity between monitoring devices on (or in) our bodies, on the one hand, and health-care practitioners, their institutions and knowledge systems, on the other. Yet this is where the significant changes are happening. Taking out someone’s appendix remotely, in Botswana, is no doubt handy if you’re stuck there, sick. But that’s a special case.

Heart disease, on the other hand, is a mass problem. It’s also big business. Suppose you give every heart patient an implanted monitor, of the kind shown here. It talks wirelessly to computers, which are trained to keep an eye open for abnormalities. And bingo! Your body is very securely plugged into the network. That’s pervasive computing, too.

And that’s just your body. People are busying themselves with our brains, too. Someone already has an artificial hippocampus. British Telecom are working on an interactive corneal implant. BT, which spends $1 million an hour on R&D – or is it a million dollars a minute, I forget – are confident that by 2005 its lens will have a screen on it, so video projections can be beamed straight onto your retina. In the words of BT’s top techie, Sir Peter Cochrane, “You won’t even have to open your eyes to go to the office in the morning.” Thank you very much, Sir Peter, for that leap forward!

By 2010, BT expect to be making direct links to the nervous system. This picture shows some of the ways they might do this. Links to the nervous system – links from it. What’s the difference? Presumably BT’s objective is that you won’t even have to wake up to go to the office…

It’s when you add all these tiny, practical, well-meant and individually admirable enhancements together that the picture begins to look creepy.
As often happens, artists and writers have alerted us to these changes first. In the words of Derrick de Kerckhove, “We are forever being made and remade by our own inventions.” And Donna Haraway, in her celebrated Cyborg Manifesto, observed: “Late 20th-century machines have made thoroughly ambiguous the difference between natural and artificial, mind and body, self-developing and externally designed. Our machines are disturbingly lively, and we are frighteningly inert.”

Call this passive acceptance of technology into our bodies Borg Drift. The drift to becoming Borg features a million small, specialised acts. It’s what happens when knowledge from many branches of science and design converges – without us noticing. We are designing a world in which every object, every building – and every body – becomes part of a network service. But we did not set out to design such an outcome. How could we have? So what are we going to do about it?

This is the innovation dilemma I referred to at the beginning.

Innovation dilemma

To introduce the second part of my talk, I made this diagram. Every CHI talk has to have a ‘big concept’ diagram – and I’m not about to buck the trend.

The innovation dilemma is simply stated: many companies know how to make amazing things, technically. That’s the top line in my chart: it keeps heading manfully upwards. It could just as easily apply to the sale of mobile devices, Internet traffic, processor speeds, whatever. Think of it as a combination of Moore’s Law (which states that processor speeds double and costs halve every 18 months or so) and Metcalfe’s Law (which states that the value of a network rises in proportion to the square of the number of people attached to it).

The dilemma is that we are increasingly at a loss to understand what to make. We’ve landed ourselves with an industrial system that is brilliant on means, but pretty hopeless when it comes to ends. We can deliver amazing performance, but we find value hard to think about.
And this is why the bottom line – emotionally if not yet financially – heads south.
The result is a divergence – which you see here on the chart – between technological intensification – the high-tech blue line heading upwards – and perceived value, the green line, which is heading downwards. The spheroid thing in the middle is us – hovering uneasily between our infatuation with technology, on the one hand, and our unease about its meaning, and possible consequences, on the other.
I have decided to call this Thackara’s Law: if there is a gap between the functionality of a technology, on the one hand, and the perceived value of that technology, on the other, then sooner or later this gap will be reflected – adversely – in the market. You can judge for yourselves whether the Nasdaq’s recent downturn confirms Thackara’s Law, or not.
In this next slide I have re-labelled the value line as the carrying capacity of the planet. I know there’s nothing worse than being made to feel guilty by ghastly downwards-heading projections about the environment. As an issue, ‘the environment’ seems to be all pain, and no gain. My point is that although we may push sustainability – or rather, the lack of it – out of our conscious minds, we feel it nonetheless. I believe that the carrying capacity of the planet, and our background anxiety about technological intensification, are two aspects of the same cultural condition.

The green line on my chart describes a synthesis of environmental and cultural angst. The two lines are diverging because for far too long we’ve been designing things without asking these simple questions: what is this stuff for? what will its consequences be? And, are we sure this is a good idea?

User experience design

This brings me to the third part of my talk, where I connect the concept of an innovation dilemma to the business of this conference, “designing the user experience”, which seems to be a major preoccupation of the new economy. My question is this: what kind of experiences should we be designing? And how should we be doing it?

Another way to think about this question is by changing the lines on the chart. What products or services might we design which exploit booming technology and connectivity – which are not, after all, going to go away – while also delivering the social quality, and environmental sustainability, that we also appear to crave?

How, in other words, might we make that green line turn upwards? One way is to shift the focus of innovation from work to everyday life. People are by nature social creatures, and huge opportunities await companies that find new ways to enhance communication and community among people in their everyday lives. ‘Social computing’, in a word. Or rather, two.

Social communications often do not have a work-related goal, so they don’t get much attention from industry. Low-rate telephone charges probably explain the low priority given to social communications by TelCos in their innovation. But social communication occupies a large amount of time in our daily lives. About two-thirds of our conversational exchanges are social chitchat. These are different from the ‘purposive’, or task-related, communications that feature in most telecommunication advertising. All those busy executives rushing around being – well, busy. Not to say obnoxious.

Social communication among extended families and social groups is a huge and largely unexplored market. I discovered just how big the potential is as a member of a project called Presence. Presence is part of an important European Union programme called i3 (it stands for Intelligent Information Interfaces). Presence addressed the question: ‘How might we use design to exploit information and communication technologies in order to meet new social needs?’ In this case, the needs of elderly Europeans. Presence brought together companies, designers, social research and human factors specialists, and people in real communities in towns in three European countries.

We learned a valuable lesson in Presence: setting out to ‘help’ elders, on the assumption that they are helpless and infirm, is to invite a sharp rebuff. Unless a project team is motivated by empowerment, not exploitation, these ‘real-time, real-world’ interactions will not succeed. Sentimentality works less well, we found, than a hard-headed approach. Our elderly ‘actors’ reacted better when we decided to approach them more pragmatically as ‘knowledge assets’ that needed to be put to work in the information economy. Old people know things, they have experience, they have time. Looked at this way, a project to connect elderly people via the Internet became an investment, not a welfare cost.

We also evolved a hybrid form of co-authorship during Presence. Telecommunication and software companies routinely give prototype or ‘alpha’ products to selected users during the development process. Indeed, most large-scale computer or communication systems are never ‘finished’ – they are customised by their users continuously, working with the supplier’s engineers and designers. In Presence, too, elderly people were actively involved, along with designers, researchers, and companies, in the development of new service scenarios.

Designing with, rather than for, elderly people raises new process issues. Project leaders have to run research, development, and interaction with citizens in parallel, rather than in linear sequence. We learned that using multiple methodologies, according to need and circumstance, works best: there is no correct way to do this kind of thing. The most pleasing aspect was the way that designers and human factors specialists came – if not to love, then at least to respect – each other. Once you get away from either/or – and embrace and/and – things loosen up amazingly.

Presence also raised fascinating issues to do with the design qualities of so-called ‘hybrid worlds’. As computing migrates from ugly boxes on our desks, and suffuses everything around us, a new relationship is emerging between the real and the virtual, the artificial and natural, the mental and material.

Social computing of the kind we explored in Presence is unexplored territory for most of us. I can think of few limits to the range of new services we might develop if we simply took an aspect of daily life, and looked for ways to make it better. I even found a list of common daily activities which have deep cultural roots, but which we can surely improve. I took the list from E. O. Wilson’s book, Consilience, in which he reflects on the wide range of topics that anthropologists and social researchers have studied, in relative obscurity, for several decades.

To recap on the story so far: we face an innovation dilemma – we know how to make things, but not what to make. To resolve the innovation dilemma, we need to focus on social quality and sustainability values first, and technology second. And I described, through the example of the Presence project, how one might take one aspect of daily life and make it better, using information technology as one of the tools.

Usability of any kind used to be either ignored completely, or treated as a downstream technical specialisation. Many of you know, better than I, what it is like to be asked to ‘add’ usability to some complex, and sometimes pointless, artefact – after everyone else has done their thing.

Today, all that is changing. In the new economy, we hear everywhere, the customer’s experience is the product! Logically, therefore, the customer’s experience is critical to the health of the firm itself!

A new generation of companies has burst onto the scene in a dramatic way over the last couple of years to meet this new challenge. They are a new and fascinating combination of business strategy, marketing, systems integration, and design. Their names are on the lips of every pundit, and on the cover of every business magazine. I thought it worth looking at a couple of these new companies.

In Scient’s discussion of user experience, the word architect has been turned into a verb, as in “The customer experience centre architects e-business solutions”. For Scient, customer experience design capabilities include information architecture, user interface engineering, visual design, content strategy, front-end technology, and usability research. Scient proclaims with gusto that “customer experience is a key component in building a legendary brand”. True to these beliefs, Scient hired a CHI luminary, John Rheinfrank. John has become the Hegel of user experience design with the wonderful job title of “Master Architect, Customer Experience”.

Over at Sapient, Rick Robinson, previously a founder of e-Lab in Chicago, has been appointed “Chief Experience Officer”. Rick is proselytising for “experience modelling” which, he promises, “will become the norm for all e-commerce applications”. Experience design, whispers Sapient modestly, will “transform the way business creates products and services . . . by paying careful attention to an individual’s entire experience, not simply to his or her expressed needs”.

The group called Advance Design is an informal, sixty-strong workshop, meeting once or twice a year, convened by Clement Mok from Sapient and Terry Swack at Razorfish, and featuring most of the luminaries in the New Age companies I referred to just now. I reckon that the energy and rhetoric of “user experience design” probably originated in this group of pioneers.

I cannot end my quick excursion into the new economy without mentioning Rare Medium, whose line on customer experience design falls somewhere between the Reverend Jerry Falwell and the Incredible Hulk. Rare talks about “the creation phase” of a project, then goes on to describe the so-called “heavy lifting” stage of the engagement, before segueing back into the last phase of the Rare methodology, “Evolution”.

Agenda for change

Now, I’m teasing good people here. I’m probably jealous that nobody made me a “Master Architect of Customer Experience”. Some of the language used by these new companies about customer experience design is a touch triumphalist. But this focus on customer experience design is a major step forwards from the bad old days – that is, the last 150 years of the industrial age – when the interests of users were barely considered. Besides, it’s tough out there. The new economy does not reward shrinking violets. But it is precisely because design and human factors are now being taken more seriously that we need to be more self-critical – not less.

To be candid, I worry that by over-promoting the concept of “user experience design” we may be creating problems for ourselves down the line.
Language matters. Let me quote you the following words from an article about last year’s CHI: “The 1999 conference on human factors in computing posed the following questions: What are the limiting factors to the success of interactive systems? What techniques and methodologies do we have for identifying and transcending those limitations? And, just how far can we push those limits?”

Do these words sound controversial to you? Probably not. They describe what CHI is about, right? But those innocuous words make me feel really uneasy.

Take the reference to “human factors in computing”. The “success of interactive systems” is stated to be our goal – not the optimisation of computing as a factor in human affairs. Do you consider yourself to be just a “factor” in the system? I don’t think so. But CHI’s own title states just that. Language like this is insidious. It’s not about the success of people, and not the success of communities – but the success of interactive systems!

We say we’re user-centred, but we think, and act, system-centered.

My critique of system-centeredness is hardly new. The industrial era is replete with complaints that, in the name of progress, we wilfully subjugate human interests to the interests of the machine. Remember Thoreau’s famous dictum that, “We do not ride on the railroad – it rides on us”? The history of industrialisation is filled with variations on that theme.

In a generation from now – say, when the child I mentioned earlier has her first child – what will writers say about pervasive computing? I believe we should try to anticipate the critics of tomorrow, now.

As Bill Buxton (a leading interaction designer) would say: usable is not a value; useful is a value. Making it easier for someone to use a system does not, for me, make it a better system. Usability is a pre-condition for the creation of value – but that’s something different.

The words creation of value are important. I do not mean the delivery of value. Users create knowledge, but only if we let them. I recommend an excellent book by Robert Johnson called User-Centered Technology for its explanation that most rhetoric about user experience depicts users as recipients of content that has been provided for them.

A passive role in the use of a system is the antithesis of the hands-on interactions by which we learn about the non-technological world. At the extreme dumb end of the spectrum, you find the concept of “idiot-proofing” – the idea that most people know little or nothing about technological systems, and are best treated as a source of error or breakdown. To me, I’m afraid, it’s the people who hold those views who are the real idiots.

Many of you may disagree vehemently with this, but I believe hiding complexity makes things worse. Interfaces which mask complexity render the user powerless to improve it. If a transaction breaks down, you are left helpless, unable to solve what might be an underlying design problem.

An architecture of passive relationships between user and system is massively inefficient. I agree with the argument that if a thing is worth using, people will figure out how to use it. I would go further: in figuring out how to use stuff, users make the stuff better. I’ll return to this idea in a moment.
The casual assumption that only designers understand complexity is related to another danger: the denigration of place. ‘Context independence’ and ‘anytime, anywhere functionality’ are, for me, misguided objectives. If we are serious about designing for real life, then real contexts have to be part of the process. User knowledge is always situated. What people know about technology, and the experiences they have with it, are always located in a certain time and place.

I would go further, and assert that ‘context independence’ destroys value. Malcolm McCullough, who wrote a terrific book called Abstracting Craft, is currently exploring ‘location awareness’ and has become critical of anytime/anyplace functionality. “The time has arrived for using technology to understand, rather than overcome, inhabited space”, he wrote to me recently; “design is increasingly about appropriateness; appropriateness is shaped by context; and the richest kinds of contexts are places.”

Putting the interests of the system ahead of the interests of people exposes us to another danger: speed freakery. “Speed is God; Time is the Devil”, goes Hitachi’s company slogan. We’re constantly told that survival in business depends on the speed with which companies respond to changes in core technologies, and to shifts in our environments. I tend towards a contrary view, that industry is trapped in a self-defeating cycle of continuous acceleration. Speed may be a given, but – like usability – it is not, per se, a virtue.

I believe we need to begin designing for multiple speeds, to be more confident and assertive in our management of time. Some changes do need to be speed-of-light – but others need time.

We have to stop whingeing about the pressures of modern life and do something about them. One way, I propose, is to budget and schedule time for reflection. Such ‘dead’ time or ‘re-booting’ time is important for people and organisations alike. We need to distinguish between time to market and time in market – a lesson I predict will be learned the hard way by many of the ‘pure-play’ dot coms. Yes, industry needs concepts, but it also needs time to accumulate value. Connections can be multiplied by technology – but understanding requires time and place.

CHI goes to Hollywood

You may well object that your work is complicated enough as it is, without being subjected to my flaky and unrealistic demands. I sympathise with the anxiety that involving users on a one-to-one basis would lead to ‘flooding’, and that nothing would ever get done.

But let’s try to re-frame the question. Let’s return to my suggestion that we replace the word ‘user’ with the word ‘actor’. I like the word actor because although actors have a high degree of self-determination in what they do, they do their thing among an amazing variety of other specialists doing theirs. There’s the writer of the screenplay, for example. The screenplay holds a film together. Without a screenplay, no film would ever get made. A movie also draws on an amazing array of specialised craft experts – the lighting and sound guys, and all those “best boys” and “gaffers” and “chief grips” – who know whatever it is that they do!

The Hollywood Model makes a lot of sense to me when thinking about the collaborative design of complex interactive systems. As an experiment, I put all the keywords and specialisms listed in the CHI conference programme into these credits for a complex interactive system I’ve called THING. On these credits are all the disciplines and approaches needed to make THING.

Let’s assume that the producer of THING is a company. Companies have money, and they co-ordinate projects. And we already agreed that people, formerly known as ‘users’, are the actors. The obvious question arises: who is the scriptwriter of THING? And who is the director?

I think the role of scriptwriter might possibly go to designers. Designers are great at telling stories about how things might be in the future. Someone has to make a proposal to get the THING process started. This picture shows a next-generation mobile ear device, designed by Ideo mainly to stimulate industry to think more broadly about wirelessness. One can imagine that such an image might trigger a large and complex project by a TelCo.
Like scriptwriters, designers tend to play a solitary rather than collaborative role in the creative process. Clement Mok (Chief Creative Officer of Sapient) put it rather well, in a magazine called Communications recently: “Designers are trained and genetically engineered to be solo pilots. They meet and get the brief, then they go off and do their magic.” Clement added that he thought software designers and engineers are that way too.
This suggests to me that, although designers should occupy the role of screenwriter, they should not necessarily be the director and run the whole show. Designers are not good at writing non-technological stories.

These sunglasses, also by Ideo, are a high-tech gadget whose function is to protect the wearer from intrusive communications. But in my opinion, you don’t protect privacy with gadgets, you protect it by having laws and values to stop people filling every cubic metre on earth with what Ezio Manzini so eloquently terms ‘semiotic pollution’. For me, gadget-centredness is the same as system-centredness – and neither of the two is properly people-centered. This is why designers are not, for me, eligible automatically to be the director of THING.

Don’t get me wrong. People do like to be stimulated, to have things proposed to them. Designers are great at this. But the line between propose and impose is a thin one. We need a balance. In my experience, the majority of architects and designers still think it is their job to design the world from outside, top-down. Designing in the world – real-time, real-world design – strikes many designers as being less cool, less fun, than the development of blue-sky concepts.

So who gets to be director of THING? I say: we all do. In the words of Nobel Laureate Murray Gell-Mann, innovation is an ‘emergent phenomenon’ that happens when there is interaction between different kinds of people, and disparate forms of knowledge. We’re talking about a new kind of process here – design for emergence. It’s a process that does not deliver finished results. It may not even have a ‘director’.

Perhaps we might think about the design of pervasive computing as a new kind of street theatre. We could call it the Open Source Theatre Company. Open Source is revolutionary because it is bottom-up; it is a culture, not just a technique. Some of the most significant advances in computing – advances that are shaping our economy and our culture – are the product of a little-understood hacker culture that delivers more innovation, and better quality, than conventional innovation processes.

Open Source is about the way software is designed and, as we’ve seen, ‘software’ now means virtually everything. Computing and connectivity permeate nature, our bodies, our homes. In a hybrid world such as this, networked collaboration of this kind is, to my mind, the only way to cope.

ARTICLES OF ASSOCIATION BETWEEN DESIGN, TECHNOLOGY, AND THE PEOPLE FORMERLY KNOWN AS USERS
The interaction of pervasive computing with social and environmental agendas for innovation represents a revolution in the way our products and systems are designed, the way we use them – and how they relate to us.

Locating innovation in specific social contexts can, I am sure, resolve the innovation dilemma I talked about today. Designing with people, not for them, can bring the whole subject of ‘user experience’ literally to life. Looked at in this way, success will come to organisations with the most creative and committed customers (sorry, ‘actors’).

The signs of such a change are there for all to see. Enlightened managers and entrepreneurs understand, nowadays, that the best way to navigate a complex world is through a focus on core values, not on chasing the latest killer app. (This picture illustrates the core values of the French train company, SNCF).

Business magazines are full of talk about a transition from transactions to a focus on relationships. We are moving from business strategies based on the ‘domination’ of markets, to the cultivation of communities. The best companies are focussing more on the innovation of new services, and new business models, than on new technology per se. They are striving to change relationships, to anticipate limits, to accelerate trends.
As designers and usability experts we need to study, criticise and adapt to these trends. Not uncritically, but creatively.

To conclude my talk today, I have drafted some “Articles of Association Between Design, Technology and The People Formerly Known As Users”. Treat them partly as an exercise, partly as a provocation. They go like this.

Articles of Association Between Design, Technology and The People Formerly Known As Users

Article 1:
We cherish the fact that people are innately curious, playful, and creative. We therefore suspect that technology is not going to go away: it’s too much fun.
Article 2:
We will deliver value to people – not deliver people to systems. We will give priority to human agency, and will not treat humans as a ‘factor’ in some bigger picture.
Article 3:
We will not presume to design your experiences for you – but we will do so with you, if asked.
Article 4:
We do not believe in ‘idiot-proof’ technology – because we are not idiots, and neither are you. We will use language with care, and will search for less patronising words than ‘user’ and ‘consumer’.
Article 5:
We will focus on services, not on things. We will not flood the world with pointless devices.
Article 6:
We believe that ‘content’ is something you do – not something you are given.
Article 7:
We will consider material and energy flows in all the systems we design. We will think about the consequences of technology before we act, not after.
Article 8:
We will not pretend things are simple, when they are complex. We value the fact that by acting inside a system, you will probably improve it.
Article 9:
We believe that place matters, and we will look after it.
Article 10:
We believe that speed and time matter, too – but that sometimes you need more, and sometimes you need less. We will not fill up all time with content.

Which is as good a moment as any, I think, for me to end. Thank you for your attention.

(This text was John Thackara’s keynote speech at CHI2000 in The Hague. CHI is the worldwide forum for professionals who influence how people interact with computers. 2,600 designers, researchers, practitioners, educators, and students came to CHI2000 from around the world to discuss the future of computer-human interaction.)

Posted in most read | Leave a comment

Objectionable objects: the failure of Workspheres

At the invitation of Paola Antonelli, one of the world’s leading design curators and an eminence at MoMA in New York, I spent a most enjoyable year talking with her, Aura Oslapas (from Stone Yamashita), Bruce Mau, and Larry Keeley, about the future of work and what that future portended for design. Paola’s show was an enormous smash hit – but I was disappointed how little of our ‘beyond the object’ thinking made it into the exhibition.
Are museums a menace?

Read More »

Posted in [no topic] | Leave a comment

Local knowledge: the design and innovation of tomorrow’s services (DoorsEast 1, Ahmedabad, India, 2000)

The purpose of DoorsEast 1, a memorable week in Ahmedabad, was to accelerate the exchange of people, knowledge and experiences among Indian and European designers and internet entrepreneurs. We wanted to know: what can western interaction designers learn from Indian design and internet culture? And what are the prospects for future joint work between the two communities?

Within a single week we discussed scenarios for using all manner of internet tools in different Indian contexts: producer and consumer cooperatives; smart distribution systems; horizontal markets; vertical nets; enhanced information flows; auctions; recommender systems; desert-based WAP applications; cows with unique IP addresses. Name any internet fad: someone in Gujarat discussed it during our visits.

These were not sentimental or fanciful discussions: questions of access, and cost, cropped up repeatedly – but most of our Indian hosts believed technical solutions were feasible. Barriers were likely to be institutional, not technical, they said. And the best way to break down institutional barriers, we all agreed, was by showing policy makers working prototypes or persuasive simulations of the services we had in mind.


Our week in Ahmedabad lit a flame: it led to the Doors of Perception events we have done in India since.


Posted in [no topic] | Leave a comment