We are here to create

A conversation with Kai-Fu Lee (Edge),



KAI-FU LEE, the founder of the Beijing-based Sinovation Ventures, is ranked #1 in technology in China by Forbes. Educated as a computer scientist at Columbia and Carnegie Mellon, his distinguished career includes working as a research scientist at Apple; Vice President of the Web Products Division at Silicon Graphics; Corporate Vice President at Microsoft and founder of Microsoft Research Asia in Beijing, one of the world’s top research labs; and then Google Corporate President and President of Google Greater China. As an Internet celebrity, he has more than fifty million followers on the Chinese micro-blogging website Weibo. As an author, among his seven bestsellers in the Chinese language, two have sold more than one million copies each. His first book in English is AI Superpowers: China, Silicon Valley, and the New World Order (forthcoming, September).



Kai-Fu Lee

The question I always ask myself, just like any human being, is who am I and why do I exist? Who are we as humans and why do we exist? When I was in college, I had a much more naïve view. I was very much into computers and artificial intelligence, and I thought it must be the case that I’m destined to work on some computer algorithms and, along with my colleagues, figure out how the brain works and how the computer can be as smart as the brain, perhaps even become a substitute of the brain, and that’s what artificial intelligence is about.

That was the simplistic view that I had. I pursued that in my college, in my graduate years. I went to Carnegie Mellon and got a PhD in speech recognition, then went to Apple, then SGI, then Microsoft, and then to Google. In each of the companies, I continued to work on artificial intelligence, thinking that that was the pursuit of how intelligence worked, and that our elucidation of artificial intelligence would then come back and tell us, "Ah, that’s how the brain works." We replicated it, so that’s what intelligence is about. That must be the most important thing in our lives: our IQ, our ability to think, analyze, predict, understand—all that stuff should be explicable by replicating it in the computer.

I’ve had the good fortune to have met Marvin Minsky, Allen Newell, Herb Simon, and my mentor, Raj Reddy. All of these people had a profound influence on the way I thought. What was consistent is that they, too, were pursuing the understanding of intelligence. The belief at one point was that we could take human intelligence and implement it as rules: if we wrote down the steps by which we go through our thoughts, the machine would have a way to act as people do.

 For example, if I’m hungry, then I want to go out and eat. If I have used a lot of money this month, I will go to a cheaper place. A cheaper place implies McDonald’s. At McDonald’s I avoid fried foods, so I just get a hamburger. That "if, then, else" is the way we think we reason, and that’s how the first generation of so-called expert systems, or symbolic AI, proceeded. I found that it was very limiting because when we wrote down the rules, there were just too many.
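The chain of reasoning in the restaurant example is exactly the shape of a first-generation expert system. A minimal sketch, with every rule a hypothetical illustration rather than any actual system:

```python
# Toy "expert system": decisions encoded as explicit if/then/else rules,
# the way first-wave symbolic AI tried to capture human reasoning.
# All rules here are hypothetical illustrations.

def choose_meal(hungry: bool, spent_this_month: float, budget: float) -> str:
    if not hungry:
        return "skip the meal"
    # Rule: having used a lot of money this month implies a cheaper place.
    if spent_this_month > 0.8 * budget:
        # Rule: "a cheaper place" implies McDonald's.
        # Rule: at McDonald's, avoid fried foods, so get a hamburger.
        return "McDonald's: hamburger"
    return "restaurant: daily special"

print(choose_meal(hungry=True, spent_this_month=950, budget=1000))
print(choose_meal(hungry=True, spent_this_month=200, budget=1000))
```

The limitation Lee describes shows up immediately: every new situation (a closed restaurant, a dietary restriction, a dinner guest) demands more rules, and the rules begin to interact in ways that are hard to manage.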

There was a professor at MCC (the Microelectronics and Computer Technology Corporation) named Doug Lenat, who is one of the smartest people I know. He hired hundreds of people to write down all the rules that we could think of, thinking that one day we’d be done and that would be the brain. Apple and Microsoft funded his research. I remember visiting him, and he was showing me all these varieties of flowers and sharing his understanding of what type of a flower this was, and which flowers had how many petals and what colors. It just turns out that the knowledge in the world was too much to possibly enter, and the interactions among the rules were too complex. The rule-based systems, that engine, we didn’t know how to build it.

That was the first wave. People got excited, thinking we could write rules, but that completely failed, resulting in only maybe a handful of somewhat useful applications. That led everybody to believe AI was doomed and not worth pursuing.

I was fortunate to have been with the second wave, and that coincided with my PhD work at Carnegie Mellon. In that work, I wondered if we could use some kind of statistics or machine learning. What if we collected samples of things and trained the system? These could be samples of speech to train the different sounds of English, samples of dogs and cats to train recognition of animals, etc. Those gave pretty good results at the time. The technology I developed and used in my PhD thesis was called "Hidden Markov Models." It was the first example of a speaker-independent speech recognition system, which was, and still is, used in many products. For example, hints of my work, carried over by people who licensed the work or who worked on the team, are evident in Siri, in the Microsoft speech recognizer, and in other technologies used in computer vision and computer speech. I did that work at Carnegie Mellon in the ‘80s, finished my thesis in ’88, and continued the work at Apple from ’90 to ’96, then at Microsoft Research around the year 2000.
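The decoder at the heart of an HMM recognizer is typically the Viterbi algorithm, which finds the most likely sequence of hidden states (e.g., phonemes) given a sequence of observations (e.g., acoustic labels). This is a generic textbook sketch with made-up two-phoneme probabilities, not the actual Sphinx system:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for an observation sequence.

    Log-probabilities are used to avoid numeric underflow on long sequences.
    """
    # trellis[t][s] = (best log-prob of reaching state s at time t, path so far)
    trellis = [{s: (math.log(start_p[s] * emit_p[s][obs[0]]), [s])
                for s in states}]
    for t in range(1, len(obs)):
        column = {}
        for s in states:
            score, path = max(
                (trellis[t - 1][p][0]
                 + math.log(trans_p[p][s] * emit_p[s][obs[t]]),
                 trellis[t - 1][p][1])
                for p in states)
            column[s] = (score, path + [s])
        trellis.append(column)
    return max(trellis[-1].values())[1]

# Hypothetical two-phoneme model with invented probabilities.
states = ["/s/", "/p/"]
start_p = {"/s/": 0.6, "/p/": 0.4}
trans_p = {"/s/": {"/s/": 0.7, "/p/": 0.3},
           "/p/": {"/s/": 0.4, "/p/": 0.6}}
emit_p = {"/s/": {"hiss": 0.9, "pop": 0.1},
          "/p/": {"hiss": 0.2, "pop": 0.8}}

print(viterbi(["hiss", "hiss", "pop"], states, start_p, trans_p, emit_p))
```

A real recognizer replaces the discrete emission table with acoustic models over audio frames and scales the same dynamic program to thousands of states.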

We were optimistic that extrapolation of this work would succeed because we saw results improving. But after a decade of work, we saw that the improvements were reaching an asymptote. Performance wasn’t going up any higher, so we were frustrated. Again, a number of people said, "You can recognize 1,000 words, you can recognize 100 objects, but this is not extensible. Humans can understand an infinite vocabulary, even new words that are made up. This is not smart. This is not AI." Then came the second crash of artificial intelligence, because it didn’t demonstrate that machines were able to do what humans can do.

In the first wave, I had the good luck of getting to know the psychologist and computer scientist Roger Schank. In fact, one of his students was an advisor of mine in my undergrad years. Those were the experiments that led me to believe that expert systems could not scale, and that our brains probably didn’t work the way we thought they did. I realized that in order to simplify our articulation of our decision process, we used "if, then, else" as a language that people understood, but our brains were much more complex than that.

During the second wave, in my thesis and PhD, I read about Judea Pearl’s work on Bayesian networks. I was very much influenced by a number of top scientists at IBM, including Dr. Fred Jelinek, Peter Brown, and Bob Mercer. They made their mark by making statistical approaches the mainstream, not only for speech but also for machine translation. I owe them a lot of gratitude. We still got stuck, but not because the technologies were wrong; in fact, the statistical approaches were exactly right.

When I worked on Hidden Markov Models at Carnegie Mellon in the late '80s, Geoff Hinton was right across the corridor working on neural networks, which he called "Time-Delay Neural Networks." Arguably, that was the first version of convolutional neural networks, which are now the talk of the town as deep learning becomes a dominant technology.

But why did that wave of statistical and neural net-based machine learning not take off? In retrospect, it had nothing to do with technology; most of the technology was already invented. The problem was just that we didn't have enough training data. Our brains work completely differently from the way these deep-learning machines work. In order for deep-learning machines to work, you have to give them many orders of magnitude more training data than humans need. Humans can see maybe hundreds of faces and start to recognize people, but these deep-learning neural networks would love to see billions of faces in order to become proficient.
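To make the orders-of-magnitude point concrete, consider the worst case of no generalization at all: a model that merely memorizes input patterns. Even a tiny 16-bit input space takes tens of thousands of random examples to cover halfway. This is a synthetic sketch, not a claim about any real model:

```python
import random

random.seed(0)

N_FEATURES = 16                # 2**16 = 65,536 distinct binary input patterns
TOTAL = 2 ** N_FEATURES

# A pure memorizer only "knows" patterns it has already seen, so its
# competence is bounded by how much of the input space the data covers.
stream = [random.getrandbits(N_FEATURES) for _ in range(50_000)]

seen = set()
for i, pattern in enumerate(stream, 1):
    seen.add(pattern)
    if i in (100, 1_000, 10_000, 50_000):
        print(f"{i:>6} examples -> {len(seen) / TOTAL:.1%} of input space covered")
```

Deep networks generalize far better than a memorizer, but the intuition scales: faces, speech, and images live in vastly larger spaces, which is part of why these systems want millions to billions of examples where a human needs hundreds.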

Of course, once they're proficient, they’re better than people. That is the caveat. But at that time, we simply didn’t have enough training data, nor did we have enough computing power to push these almost-discovered technologies to the extreme. Google was the company that began to realize that in order to do search you need a lot of machines, and you need them to run in parallel. Then Jeff Dean and others at Google found that once you had those parallel machines, you could do more than search; you could build AI on top of them. Then they found that to do AI well, you needed specialized chips. Along came NVIDIA’s GPUs, and then Google built its own TPUs. It's been an interesting progression. It was fortuitous that Google chose to do search, that search needed servers, and that Google had Jeff Dean; that evolved into today’s architecture of massively parallel GPU- or TPU-based learning, which can learn from a lot more data in a single domain.

New technologies were developed on top of this massively parallel machine-learning architecture built on GPUs and new accelerators. More and more people were able to train face recognizers, speech recognizers, and image recognizers, and also to apply AI to search and prediction. Lots of Internet data came about. Amazon uses it to help predict what you might want to buy, Google uses it to predict what ad you might want to click on and potentially spend money, and Microsoft uses it as well. In China we have Tencent and Alibaba. Many applications are coming about based on the huge amounts of Internet data.

At the same time these technologies were progressing, Geoff Hinton, Yann LeCun, and Yoshua Bengio were the three people who continued to work on neural networks, even though in the early 2000s they were no longer in the mainstream. In the ‘80s, this work was a novelty, and breakthrough statistical work indicated that these networks didn’t scale. Funding agencies then abandoned them and conferences stopped accepting their papers, but these three researchers kept at it with small amounts of funding, refining and developing better algorithms, and then more data came along. A breakthrough came with the creation of new algorithms, sometimes called "convolutional neural networks," and now known as "deep learning." Variants of the work also relate to reinforcement learning and transfer learning.

This set of technologies that emanated from these three professors began to blossom in the industry. Speech recognition systems built by top companies are beating human performance, and it's the same with face recognition and image recognition. There are e-commerce implications and speaker/user identification; applied to Internet data, it meant better predictions for Amazon, which makes more money in the process; better predictions for Facebook in terms of how to rank your news feed; better search results from Google. Deep neural networks started to get used at Google in the late 2000s, and in the last seven or eight years they have blossomed to reach almost everywhere. Architectures were coming out and more intelligent systems were being developed.

Of course, the event that ignited the whole world was AlphaGo: beating Lee Sedol of Korea and then Ke Jie of China by increasingly large gaps and, more recently, the publication of a paper showing that AlphaGo could be trained from scratch with no human knowledge. These are all breakthroughs that caused the whole world to realize that this time AI is for real. We had something in the second wave; the neural nets and statistical approaches were right, we just didn’t have enough data, enough compute power, or enough advancement of the technologies at the time to make it happen. But now we do.

AI is taking off everywhere. There were new schools of thought that came about. One set of people started to project back at our original question: who are we and why do we exist? These people make the extrapolation that because AlphaGo was able to improve itself exponentially over the past two or three years, if we push that to other domains, we’re going to have machines that will be superintelligent, that could either be plugged into our heads and become our human augmentation, or they’ll be evil and take over mankind.

I want to just shut down that train of thought. That’s just inaccurate. As advanced as today's AI is, and as much as it is doing a phenomenal job beating humans at playing games, speech recognition, face recognition, autonomous vehicles, and industrial robots, it is going to be limited in the following ways: Today’s AI, which we call weak AI, is an optimizer that, based on a lot of data in one domain, learns to do one thing extremely well. It’s a very vertical, single-task robot, if you will, but it does only one thing. You cannot teach it many things. You cannot teach it multi-domain. You cannot teach it to have common sense. You cannot give it emotions. It has no self-awareness, and therefore no desire or even understanding of how to love or dominate a human being.

All the dystopian talk is just nonsense. It’s too much imagination. We’re seeing AI going into new applications in what appears to be an exponential growth, but it's an exponential growth of applications of the mature technologies that exist. That growth will be over once we develop all of them. Then we have to wait for more breakthroughs for further advancement of AI. But you cannot predict further advancements.

If you look at the history of AI, a deep-learning type of innovation happens only rarely. It has happened once since 1957: one breakthrough in sixty years. You cannot go ahead and predict that we’re going to have a breakthrough next year, and then another the month after that, and then the day after that. That would be exponential. Exponential adoption of applications is, for now, happening. That’s great, but the idea of exponential inventions is a ridiculous concept. As for the people who make those claims and who claim singularity is ahead of us, I think that’s based on absolutely no engineering reality.

Today’s AI only does one task at a time, and it’s great as a tool. It’s great at creating value. It will replace many human job tasks and some human jobs. That is what we should think about, not about this grand, strong AI, where the machine is like a human and can reason across domains and have common sense. That is not at all predictable from today’s progress.

Might it happen some day, a hundred or a thousand years from now? I suppose anything is possible. But we should probably focus our energy on what is here today. And what’s here today is the super optimizers that can do a better job than humans in picking stocks, in making loans, in doing customer support, in doing telemarketing, in doing assembly line work, in doing assistance work, in doing broker’s work, in doing paralegal work, and doing it better than humans. They’re taking those over and freeing human time, allowing us to do what we really love and what we do best. That’s the opportunity of a lifetime, not this dystopia of computers becoming superintelligent.

~ ~ ~ ~

I’m currently working in venture capital. We’re working with startup companies. We see a lot of progress in the results just from taking the already well-proven algorithms and applying them to real-world problems. We’re now at an age where all the fruit trees have blossomed. There is so much low-hanging fruit that we should maximize our opportunity by taking each one to create value. In order to create value, we don’t yet need new advances in science. The scientists should go off and invent the next better-than-deep-learning algorithm, and we, in the product and entrepreneurial venture space, should maximize value. That value is tremendous, because AI will eventually take over all of our routine tasks, do them better than people, and create enormous value for society.

There have been a number of studies of AI as today’s technology, without new inventions, applied to finance, hospitals, government, education, and all kinds of other areas, creating a lot of value. You can look at McKinsey, Goldman Sachs, and PwC. We’ll be able to do many of the historical things much better than we do now. Basically, the algorithm, running on just electricity, will outdo people in all of these tasks. Whether you look at it as an increase in the value chain or as replacing human routine work, the amount of value is tremendous. These firms are predicting $15, $20, $30 trillion of value in the next ten to fifteen years. That makes for exciting areas for investment and entrepreneurship.

Suppose we build this smart paralegal system, this system that can write short articles better than reporters, or this loan officer replacement program, or the assembly line, or the receptionist, and so on. Well, what happens to the people who are in those jobs? In an abstract world, if we were to reconstruct the world from scratch, we’d be very happy human beings, because we’d have machines do these repetitive and routine tasks. We could then elevate ourselves to thinking, inventing, creating, socializing, having fun, and taking up hobbies. It would be an amazing life. But first we’re all going to face a very challenging next fifteen or twenty years, when half of the jobs are going to be replaced by machines. Humans have never seen this scale of massive job decimation.

The industrial revolution took a lot longer, and the industrial revolution created jobs while it replaced jobs. Where it once took a few artisans months to build an automobile, the assembly line allowed that to happen in a fraction of the time by dividing the work into little chunks. Some jobs disappeared. Many jobs were created. Car prices came down, and employment went up.

Artificial intelligence is different, because when we build an AI loan officer that decides whether to give someone a loan, based on purely quantitative information, that loan officer will be better than 99 percent of all human loan officers out there. They will be replaced outright, because it is a simple, single-domain optimization problem: feed in everything we know about a person, and out comes the likelihood of repayment versus default. That likelihood is a quantitative computation based on a huge amount of data that no human can possibly match. The people in those jobs will be out of work and will have to do something else; the same goes for security, paralegal work, accounting, and even reporters and translators.
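The single-domain optimization Lee describes can be sketched as a logistic model: applicant features in, probability of default out. The features, weights, and threshold below are invented for illustration; a production system would learn its weights from millions of historical loans:

```python
import math

# Hypothetical, hand-set weights; a real model would learn these from data.
WEIGHTS = {"income_k": -0.03, "debt_ratio": 2.5, "late_payments": 0.8}
BIAS = -1.0

def default_probability(applicant: dict) -> float:
    """Logistic score: P(default) = sigmoid(w . x + b)."""
    z = BIAS + sum(w * applicant[f] for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def decide(applicant: dict, threshold: float = 0.23) -> str:
    """Approve when the predicted default probability is below the threshold."""
    return "deny" if default_probability(applicant) > threshold else "approve"

safe = {"income_k": 80, "debt_ratio": 0.2, "late_payments": 0}
risky = {"income_k": 25, "debt_ratio": 0.9, "late_payments": 4}
print(decide(safe), f"P(default)={default_probability(safe):.2f}")
print(decide(risky), f"P(default)={default_probability(risky):.2f}")
```

The model's only "explanation" is a number, which is exactly the point Lee makes later about whether this counts as intelligence.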

We're seeing speech-to-speech translation work as well as amateur translators now. They're not yet at a professional level, but good enough for travel. It’s possible that eventually we don’t have to learn foreign languages because we’ll just have a little earpiece that translates what other people say. We'll have this wonderful addition in convenience, productivity, value creation, saving time, but at the same time we have to be cognizant that translators will be out of jobs.

When we think about the industrial revolution, we see it as having done a lot of good by creating a lot of jobs, but the process was painful and some of the tactics were questionable. We’re going to see all those issues come up again, and even worse, in the AI revolution. In the industrial revolution, many people were replaced and displaced, and they had to live in destitution. Overall employment and wealth were created, but the wealth was captured by a small number of people.

Fortunately, the industrial revolution lasted a long time, so the gradual shift allowed governments to deal with one group at a time whose jobs were being displaced. During the industrial revolution, a certain work ethic was perpetuated: The capitalists wanted the workers of the world to believe that if they worked hard, even at a routine, repetitive job, they would get compensated, they would have a certain degree of wealth, and that would give them dignity and self-actualization. That surely isn’t how we want to be remembered as mankind.

At the same time, that is how most people on earth understand their existence. And that belief is extremely dangerous to hold now, because AI is going to be taking most of those jobs that are routine and repetitive. It’s not just an issue of some people losing jobs and not getting a salary. That could potentially be taken care of with UBI or some sort of income scheme. The issue is that the people losing their jobs used to feel that their reason for existence was the work ethic: working hard, getting that house, and providing for the family.

Repetitive work can be fine to the extent that you choose it: if you like photography or calligraphy, you can do it repeatedly and think every piece is a little bit different; you enjoy it, you’re getting calmness, you’re growing as a person. That’s all fine. But if you put someone in the back room of a restaurant where all they do is cut onions all day, or in a factory where all they do is screw iPhones together, or if someone is a junior accountant and all they do is check the numbers in the books, those jobs are not giving them enrichment. They’re not different, they’re not interesting, and they’re not advancing them as human beings. For the people who benefited from the industrial revolution, it was to their advantage that most of the world thought that way, because then they could get hard-working people to grow their pocketbooks, their wealth.

This whole advancement of artificial intelligence got me rethinking the reason I started on this journey: to figure out how our brain works. It wasn’t through expert systems, it wasn’t through neural networks, and it isn’t now through deep learning. Does that really answer the question? I have only recently realized that this journey has in part been tremendously successful. We’re about to see deep learning create tens of trillions of dollars of wealth for mankind. We’re about to see many routine jobs replaced, so we have more free time on our hands.

At the same time, this deep learning has nothing to do with the way our brain works. We have love, we have emotion, and we have self-awareness. Our DNA has been iterated on over billions of years to provide for our survival on this planet, and all of the things that make us human have nothing to do with this so-called AI. When we say narrow AI, the AI that’s optimizing, that’s really all it is. It is a machine. It is a tool. Even when it is driving us around in an autonomous vehicle, it is not intelligently thinking, and it is not at all able to reason with common sense.

Even "artificial intelligence" is somewhat of a misnomer. When we think of intelligence, there are many types of things that aggregate to cause us to think someone is intelligent. If someone does only one thing extremely well, do we call that person intelligent? If they can’t explain why they did what they did, other than knowing that the most probable stock to buy today is this stock, or that we should not loan this person money because the default likelihood is 23 percent, is that really intelligent? I don’t think so.

My original dream of finding who we are and why we exist ended in failure. Even though we invented all these wonderful tools that will be great for our future, for our kids, for our society, we have not figured out why humans exist. What is interesting for me is that in understanding that these AI tools are doing repetitive tasks, it certainly comes back to tell us that doing repetitive tasks can’t be what makes us human. The arrival of AI will at least remove what cannot be our reason for existence on this earth. If that’s half of our job tasks, then that’s half of our time back to thinking about why we exist. One very valid reason for existing is that we are here to create. What AI cannot do is perhaps a potential reason for why we exist. One such direction is that we create. We invent things. We celebrate creation. We’re very creative about the scientific process, about curing diseases, about writing books and movies, creative about telling stories, doing a brilliant job in marketing. This is our creativity that we should celebrate, and that’s perhaps what makes us human.

Another angle of what AI cannot do is love. We love each other, we truly connect with people, and we want to help people. By helping people, we get a sense of self-worth and dignity and self-actualization. It also suggests that perhaps it is the ability to create and the ability to love that are the reasons why we exist.

AI has gone all the way around to teach me that our brain is too hard to understand. It is not just one organ; it’s our whole body and our whole way of thinking. It’s our whole evolution. What AI has done is maybe come back to say, "Hey, Kai-Fu, maybe you and mankind have been fooled by the industrial revolution into thinking doing repetitive tasks can possibly be a reason for your existence. If you think that way, think no longer. AI is taking all those jobs away. Only what AI cannot do can possibly be a reason for your existence. And it is perhaps about creativity, it is perhaps about love, it is perhaps something else, but it sure isn’t routine jobs."

That has struck home for me: I realize that I was naively pursuing the replication of the human brain and not at all accomplishing that purpose. The people who work with us accomplish great things with AI as a tool that can solve problems, make money, and remove drudgery from our lives. We have to go back to square one and think about who we are. Are we here to create, to love, or is it something else?

If we look at the history of computing, we started by connecting people to information on one computer, and then the Internet connected all the computers together. We were then able to access more information, but it wasn’t easy to find, so search engines helped us find it. Then we wanted to go beyond information, so the social network connected us with each other. We wanted to connect anytime, anywhere, so mobile allowed us to connect with people and information anywhere. That’s about where we are.

We can see a number of major additional enhancements coming. In China, for example, payment is instantaneous, frictionless, and peer-to-peer, down to micro-payments. Anybody can pay anybody. That will become another platform for innovation. Those types of things will continue to grow, and our network of people-to-people, people-to-information, and then payment, access, and data will accumulate, and AI will come in and make very intelligent recommendations. In the future, the system will work as a whole, with people and machines.

IoT will be the next step, connecting devices together. IoT has been talked about for a long time but hasn’t yet taken off. In the near term, we can already anticipate smart microphones and video cameras that aggregate content and make very intelligent predictions about traffic, about people, about what they want. Imagine when we’re online: our cookie tells Amazon what we looked at, what we bought, and what we didn’t buy. That’s used to feed Amazon’s intelligence about what to recommend and sell to us. There are already such stores: Amazon Go is here, and there are stores in China that have cameras to know who has entered the room, who has picked up what product, who has bought what product. Those will come back to become a very powerful profile that integrates online and offline.

Essentially, we’re looking toward a future where everything about us, online and offline, will become profiles and will be used to give us convenience. It will be the next big step in trading privacy for convenience. Some people won’t be comfortable with it, but it's an eventuality that probably cannot be avoided. Social networks will also grow to be more real-name, traceable, and accountable, and the data from them will generate a lot of value.

All of this is going towards people and devices getting connected, data being extracted to create intelligence. That intelligence will deliver convenience and value to the user. Those of us who want that convenience will need to trade our privacy. This is an interesting question, but I don’t think most people can say no to it.

As we think about all the benefits from AI, there are a number of issues one needs to be concerned about. One issue, as I talked about, was the job losses and how to deal with that. Another issue is the haves and have nots. The people who are inventing these AI algorithms, building AI companies, they will become the haves. The people whose jobs are replaced will be the have nots. And the gap between them, whether it’s in wealth or power, will be dramatic, and will be perhaps the largest that mankind has ever experienced.

Similarly, the companies that have AI and the companies that are traditional and slow to shift will have large gaps as well. Lastly, and perhaps most difficult to solve, is the gap between countries. The countries that have AI technology will be much better off. They’ll be creating and extracting value. The countries that have large populations of users whose data is gathered and iterated through the AI algorithm, they’ll be in good shape. The US and China are in good shape.

The countries that are not in good shape are the countries that have perhaps a large population, but no AI, no technologies, no Google, no Tencent, no Baidu, no Alibaba, no Facebook, no Amazon. Their people will basically be data points for the countries whose software is dominant there. If a country in Africa largely uses Facebook and Google, it will be providing its data to help Facebook and Google make more money, but its jobs will still be replaced nevertheless.

Think about a situation in the US or China, where the AI companies will take all the data and make a great deal of money. People will be displaced, but potentially we can imagine the government redistributing that wealth from the people who made it, perhaps as a tax, and distributing it to those who have not, perhaps as UBI or some variant. The US and China are okay. But think about another country that has only the displaced and not the creators, or mostly the displaced and very few valuable companies. Where will the tax revenue come from to give to the displaced? That’s the big issue.

With the US and China being very powerful in their AI technologies, with companies that benefit from the data, and with a lot of data from their own countries and others, they will be very well off. Other countries will be in a difficult position. We’re already seeing Europe run into challenges on this front. Its chosen response has been to enforce antitrust laws against American companies as a way to collect money from them. That surely is not a sustainable approach. There will be poorer countries in the developing and underdeveloped worlds that used to have the ambition and aspiration to, like China, use lower-cost labor to win manufacturing business and eventually get onto the developed-country path. That dream is probably no longer feasible. The low-cost-labor formula that may have propelled China from a poor country to a relatively wealthy one is no longer available, because AI and robots will be doing the manufacturing and the labor work.

The large population that was China’s asset to its rise will become a liability to many countries. The larger the population, the worse off you are, unless that population has a significant enough percentage that can create value, can build AI, can build companies, and can make money. This global geopolitical future is very worrisome because you might have some countries with no choice but to become a vassal state to the US or China: You got my data, I will do what you want, and you help me feed the poor people. That would be one very direct way to describe a very worrisome outcome.

Another outcome might be that the state becomes unable to manage the poverty and the restlessness in the country. That could result in a country with a lot of distress, or anarchy, or maybe another North Korea. You can imagine a country that is desperate, sees no future of creating wealth, and is being left behind; that’s very worrisome. One could be optimistic and naïve and say, "Well, hopefully someday there will be a world government, because there is enough money to go around." Historically, looking at all the foolish things that we have done as mankind, I have very little hope that’s going to happen. These are issues we need to bring up in dealing with the haves and have nots in the widening gap between countries and between people.

I don’t have the solutions, but if we want to come back to the question of why we exist, we at this point can say we certainly don’t exist to do routine work. We perhaps exist to create. We perhaps exist to love. And if we want to create, let’s create new types of jobs that people can be employed in. Let’s create new ways in which countries can work together. If we think we exist to love, let’s first think how we can love the people who will be disadvantaged.

Source: https://www.edge.org/conversation/kai_fu_l...