Picked topics
Avantium
bvermeulen - posts: 146
Updated 2 months, 3 weeks ago by bvermeulen

It is finally finished

Picked topics
ASML annual report 2023
bvermeulen - posts: 146
Updated 3 months, 3 weeks ago by bvermeulen

ASML annual report 2023

Picked topics
Why the Past 10 Years of American Life Have Been Uniquely Stupid 1/2
bvermeulen - posts: 146
Updated 2 years, 7 months ago by bvermeulen

Why the Past 10 Years of American Life Have Been Uniquely Stupid

It’s not just a phase.

By Jonathan Haidt
Illustrations by Nicolás Ortega

April 11, 2022

What would it have been like to live in Babel in the days after its destruction? In the Book of Genesis, we are told that the descendants of Noah built a great city in the land of Shinar. They built a tower “with its top in the heavens” to “make a name” for themselves. God was offended by the hubris of humanity and said:
Look, they are one people, and they have all one language; and this is only the beginning of what they will do; nothing that they propose to do will now be impossible for them. Come, let us go down, and confuse their language there, so that they will not understand one another’s speech.
The text does not say that God destroyed the tower, but in many popular renderings of the story he does, so let’s hold that dramatic image in our minds: people wandering amid the ruins, unable to communicate, condemned to mutual incomprehension.

The story of Babel is the best metaphor I have found for what happened to America in the 2010s, and for the fractured country we now inhabit. Something went terribly wrong, very suddenly. We are disoriented, unable to speak the same language or recognize the same truth. We are cut off from one another and from the past.
It’s been clear for quite a while now that red America and blue America are becoming like two different countries claiming the same territory, with two different versions of the Constitution, economics, and American history. But Babel is not a story about tribalism; it’s a story about the fragmentation of everything. It’s about the shattering of all that had seemed solid, the scattering of people who had been a community. It’s a metaphor for what is happening not only between red and blue, but within the left and within the right, as well as within universities, companies, professional associations, museums, and even families.

Babel is a metaphor for what some forms of social media have done to nearly all of the groups and institutions most important to the country’s future—and to us as a people. How did this happen? And what does it portend for American life?

The Rise of the Modern Tower

There is a direction to history, and it is toward cooperation at larger scales. We see this trend in biological evolution, in the series of “major transitions” through which multicellular organisms first appeared and then developed new symbiotic relationships. We see it in cultural evolution too, as Robert Wright explained in his 1999 book, Nonzero: The Logic of Human Destiny. Wright showed that history involves a series of transitions, driven by rising population density plus new technologies (writing, roads, the printing press) that created new possibilities for mutually beneficial trade and learning. Zero-sum conflicts—such as the wars of religion that arose as the printing press spread heretical ideas across Europe—were better thought of as temporary setbacks, and sometimes even integral to progress. (Those wars of religion, he argued, made possible the transition to modern nation-states with better-informed citizens.) President Bill Clinton praised Nonzero’s optimistic portrayal of a more cooperative future thanks to continued technological advance.

The early internet of the 1990s, with its chat rooms, message boards, and email, exemplified the Nonzero thesis, as did the first wave of social-media platforms, which launched around 2003. Myspace, Friendster, and Facebook made it easy to connect with friends and strangers to talk about common interests, for free, and at a scale never before imaginable. By 2008, Facebook had emerged as the dominant platform, with more than 100 million monthly users, on its way to roughly 3 billion today. In the first decade of the new century, social media was widely believed to be a boon to democracy. What dictator could impose his will on an interconnected citizenry? What regime could build a wall to keep out the internet?

The high point of techno-democratic optimism was arguably 2011, a year that began with the Arab Spring and ended with the global Occupy movement. That is also when Google Translate became available on virtually all smartphones, so you could say that 2011 was the year that humanity rebuilt the Tower of Babel. We were closer than we had ever been to being “one people,” and we had effectively overcome the curse of division by language. For techno-democratic optimists, it seemed to be only the beginning of what humanity could do.

In February 2012, as he prepared to take Facebook public, Mark Zuckerberg reflected on those extraordinary times and set forth his plans. “Today, our society has reached another tipping point,” he wrote in a letter to investors. Facebook hoped “to rewire the way people spread and consume information.” By giving them “the power to share,” it would help them to “once again transform many of our core institutions and industries.”
In the 10 years since then, Zuckerberg did exactly what he said he would do. He did rewire the way we spread and consume information; he did transform our institutions, and he pushed us past the tipping point. It has not worked out as he expected.

Things Fall Apart

Historically, civilizations have relied on shared blood, gods, and enemies to counteract the tendency to split apart as they grow. But what is it that holds together large and diverse secular democracies such as the United States and India, or, for that matter, modern Britain and France?
Social scientists have identified at least three major forces that collectively bind together successful democracies: social capital (extensive social networks with high levels of trust), strong institutions, and shared stories. Social media has weakened all three. To see how, we must understand how social media changed over time—and especially in the several years following 2009.

In their early incarnations, platforms such as Myspace and Facebook were relatively harmless. They allowed users to create pages on which to post photos, family updates, and links to the mostly static pages of their friends and favorite bands. In this way, early social media can be seen as just another step in the long progression of technological improvements—from the Postal Service through the telephone to email and texting—that helped people achieve the eternal goal of maintaining their social ties.
But gradually, social-media users became more comfortable sharing intimate details of their lives with strangers and corporations. As I wrote in a 2019 Atlantic article with Tobias Rose-Stockwell, they became more adept at putting on performances and managing their personal brand—activities that might impress others but that do not deepen friendships in the way that a private phone conversation will.

Once social-media platforms had trained users to spend more time performing and less time connecting, the stage was set for the major transformation, which began in 2009: the intensification of viral dynamics.

Babel is not a story about tribalism. It’s a story about the fragmentation of everything.
Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom. This was often overwhelming in its volume, but it was an accurate reflection of what others were posting. That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers. Facebook soon copied that innovation with its own “Share” button, which became available to smartphone users in 2012. “Like” and “Share” buttons quickly became standard features of most other platforms.
Shortly after its “Like” button began to produce data about what best “engaged” its users, Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well. Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.
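The shift described above—from a chronological timeline to an engagement-optimized feed—can be illustrated with a toy sketch. This is not Facebook's actual system; the post fields, scores, and weights below are invented for illustration. The point is only the structural change: posts are ordered by a model's predicted engagement rather than by recency, and weighting shares above likes pushes highly shareable (often emotionally charged) content to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int            # higher = newer
    predicted_likes: float    # hypothetical model score: chance of a "like"
    predicted_shares: float   # hypothetical model score: chance of a "share"

def chronological_feed(posts):
    """Pre-2009-style timeline: newest first, no ranking model."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts, like_weight=1.0, share_weight=3.0):
    """Post-2009-style feed: order by predicted engagement.

    Shares are weighted more heavily than likes, so content likely to be
    reshared rises to the top. The weights are arbitrary illustrative values.
    """
    def score(p):
        return like_weight * p.predicted_likes + share_weight * p.predicted_shares
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("friend_a", timestamp=3, predicted_likes=0.2, predicted_shares=0.01),
    Post("friend_b", timestamp=2, predicted_likes=0.1, predicted_shares=0.30),  # outrage bait
    Post("friend_c", timestamp=1, predicted_likes=0.5, predicted_shares=0.05),
]

print([p.author for p in chronological_feed(posts)])  # ['friend_a', 'friend_b', 'friend_c']
print([p.author for p in engagement_feed(posts)])     # ['friend_b', 'friend_c', 'friend_a']
```

Note that the ranking change requires no change to the content itself: the same three posts produce a very different feed once the sort key switches from time to predicted engagement.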

By 2013, social media had become a new game, with dynamics unlike those in 2008. If you were skillful or lucky, you might create a post that would “go viral” and make you “internet famous” for a few days. If you blundered, you could find yourself buried in hateful comments. Your posts rode to fame or ignominy based on the clicks of thousands of strangers, and you in turn contributed thousands of clicks to the game.
This new game encouraged dishonesty and mob dynamics: Users were guided not just by their true preferences but by their past experiences of reward and punishment, and their prediction of how others would react to each new action. One of the engineers at Twitter who had worked on the “Retweet” button later revealed that he regretted his contribution because it had made Twitter a nastier place. As he watched Twitter mobs forming through the use of the new tool, he thought to himself, “We might have just handed a 4-year-old a loaded weapon.”

As a social psychologist who studies emotion, morality, and politics, I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.

It was just this kind of twitchy and explosive spread of anger that James Madison had tried to protect us from as he was drafting the U.S. Constitution. The Framers of the Constitution were excellent social psychologists. They knew that democracy had an Achilles’ heel because it depended on the collective judgment of the people, and democratic communities are subject to “the turbulency and weakness of unruly passions.” The key to designing a sustainable republic, therefore, was to build in mechanisms to slow things down, cool passions, require compromise, and give leaders some insulation from the mania of the moment while still holding them accountable to the people periodically, on Election Day.

The tech companies that enhanced virality from 2009 to 2012 brought us deep into Madison’s nightmare. Many authors quote his comments in “Federalist No. 10” on the innate human proclivity toward “faction,” by which he meant our tendency to divide ourselves into teams or parties that are so inflamed with “mutual animosity” that they are “much more disposed to vex and oppress each other than to cooperate for their common good.”

But that essay continues on to a less quoted yet equally important insight, about democracy’s vulnerability to triviality. Madison notes that people are so prone to factionalism that “where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts.”

Social media has both magnified and weaponized the frivolous. Is our democracy any healthier now that we’ve had Twitter brawls over Representative Alexandria Ocasio-Cortez’s Tax the Rich dress at the annual Met Gala, and Melania Trump’s dress at a 9/11 memorial event, which had stitching that kind of looked like a skyscraper? How about Senator Ted Cruz’s tweet criticizing Big Bird for tweeting about getting his COVID vaccine?

It’s not just the waste of time and scarce attention that matters; it’s the continual chipping-away of trust. An autocracy can deploy propaganda or use fear to motivate the behaviors it desires, but a democracy depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions. Blind and irrevocable trust in any particular individual or organization is never warranted. But when citizens lose trust in elected leaders, health authorities, the courts, the police, universities, and the integrity of elections, then every decision becomes contested; every election becomes a life-and-death struggle to save the country from the other side. The most recent Edelman Trust Barometer (an international measure of citizens’ trust in government, business, media, and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list, while contentious democracies such as the United States, the United Kingdom, Spain, and South Korea scored near the bottom (albeit above Russia).

Recent academic studies suggest that social media is indeed corrosive to trust in governments, news media, and people and institutions in general. A working paper that offers the most comprehensive review of the research, led by the social scientists Philipp Lorenz-Spreen and Lisa Oswald, concludes that “the large majority of reported associations between digital media use and trust appear to be detrimental for democracy.” The literature is complex—some studies show benefits, particularly in less developed democracies—but the review found that, on balance, social media amplifies political polarization; foments populism, especially right-wing populism; and is associated with the spread of misinformation.

When people lose trust in institutions, they lose trust in the stories told by those institutions. That’s particularly true of the institutions entrusted with the education of children. History curricula have often caused political controversy, but Facebook and Twitter make it possible for parents to become outraged every day over a new snippet from their children’s history lessons––and math lessons and literature selections, and any new pedagogical shifts anywhere in the country. The motives of teachers and administrators come into question, and overreaching laws or curricular reforms sometimes follow, dumbing down education and reducing trust in it further. One result is that young people educated in the post-Babel era are less likely to arrive at a coherent story of who we are as a people, and less likely to share any such story with those who attended different schools or who were educated in a different decade.

The former CIA analyst Martin Gurri predicted these fracturing effects in his 2014 book, The Revolt of the Public. Gurri’s analysis focused on the authority-subverting effects of information’s exponential growth, beginning with the internet in the 1990s. Writing nearly a decade ago, Gurri could already see the power of social media as a universal solvent, breaking down bonds and weakening institutions everywhere it reached. He noted that distributed networks “can protest and overthrow, but never govern.” He described the nihilism of the many protest movements of 2011 that organized mostly online and that, like Occupy Wall Street, demanded the destruction of existing institutions without offering an alternative vision of the future or an organization that could bring it about.

Gurri is no fan of elites or of centralized authority, but he notes a constructive feature of the pre-digital era: a single “mass audience,” all consuming the same content, as if they were all looking into the same gigantic mirror at the reflection of their own society. In a comment to Vox that recalls the first post-Babel diaspora, he said:
The digital revolution has shattered that mirror, and now the public inhabits those broken pieces of glass. So the public isn’t one thing; it’s highly fragmented, and it’s basically mutually hostile. It’s mostly people yelling at each other and living in bubbles of one sort or another.

Mark Zuckerberg may not have wished for any of that. But by rewiring everything in a headlong rush for growth—with a naive conception of human psychology, little understanding of the intricacy of institutions, and no concern for external costs imposed on society—Facebook, Twitter, YouTube, and a few other large platforms unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.

I think we can date the fall of the tower to the years between 2011 (Gurri’s focal year of “nihilistic” protests) and 2015, a year marked by the “great awokening” on the left and the ascendancy of Donald Trump on the right. Trump did not destroy the tower; he merely exploited its fall. He was the first politician to master the new dynamics of the post-Babel era, in which outrage is the key to virality, stage performance crushes competence, Twitter can overpower all the newspapers in the country, and stories cannot be shared (or at least trusted) across more than a few adjacent fragments—so truth cannot achieve widespread adherence.

The many analysts, including me, who had argued that Trump could not win the general election were relying on pre-Babel intuitions, which said that scandals such as the Access Hollywood tape (in which Trump boasted about committing sexual assault) are fatal to a presidential campaign. But after Babel, nothing really means anything anymore––at least not in a way that is durable and on which people widely agree.

For the complete article, please follow the link.

Picked topics
Donald Knuth wisdom
bvermeulen - posts: 146
Updated 2 years, 7 months ago by bvermeulen

Donald Knuth on work habits, problem solving, and happiness

Donald Knuth

Shuvomoy Das Gupta

April 13, 2020

Recently, I came across a few old and new interviews of Donald Knuth, in which he sheds light on his work habits, how he approaches problems, and his philosophy of happiness. I really enjoyed reading the interviews. In this blog post, I record his thoughts on approaching a problem, organizing daily activities, and the pursuit of happiness.

Seeing both the forest and the trees in research. “I’ve seen many graduate students working on their theses, over the years, and their research often follows a pattern that supports what I’m trying to explain. Suppose you want to solve a complicated problem whose solution is unknown; in essence you’re an explorer entering into a new world. At first your brain is learning the territory, and you’re making tiny steps, baby steps in the world of the problem. But after you’ve immersed yourself in that problem for a while then you can start to make giant steps, bigger steps, and you can see many things at once, so your brain is getting ready for a new kind of work. You begin to see both the forest and the trees.”

How Knuth works on a project. “When I start to investigate some topic, during the first days I fill up scratch paper like mad. I mean, I have a huge pile of paper at home, paper that’s half-used, used on only one side; I’ve kept a lot of partially printed sheets instead of throwing them away, so that I can write on the back sides. And I’ll use up 20 sheets or more per hour when I’m exploring a problem, especially at the beginning. For the first hour I’m trying all kinds of stuff and looking for patterns. Later, after internalizing those calculations or drawings or whatever they are, I don’t have to write quite so much down, and I’m getting closer to a solution. The best test of when I’m about ready to solve a problem is whether or not I can think about it sensibly while swimming, without any paper or notes to help out. Because my mind is getting accustomed to the territory, and finally I can see what might possibly lead to the end. That’s oversimplifying the truth a little bit, but the main idea is that, with all my students, I’ve noticed that they get into a mental state where they’ve become more familiar with a certain problem area than anybody else in the world.”

Visualizers vs Symbolizers. “Well, you know, I’m visualizing the symbols. To me, the symbols are reality, in a way. I take a mathematical problem, I translate it into formulas, and then the formulas are the reality. I know how to transform one formula into another. That should be the subtitle of my book Concrete Mathematics: How to Manipulate Formulas. I’d like to talk about that a little.

“I have a feeling that a lot of the brightest students don’t go into mathematics because––curious thing––they don’t need algebra at the level I did. I don’t think I was smarter than the other people in my class, but I learned algebra first. A lot of very bright students today don’t see any need for algebra. They see a problem, say, the sum of two numbers is 100 and the difference is 20, they just sort of say, “Oh, 60 and 40.” They’re so smart they don’t need algebra. They go on seeing lots of problems and they can just do them, without knowing how they do it, particularly. Then finally they get to a harder problem, where the only way to solve it is with algebra. But by that time, they haven’t learned the fundamental ideas of algebra. The fact that they were so smart prevented them from learning this important crutch that I think turned out to be important for the way I approach a problem. Then they say, “Oh, I can’t do math.” They do very well as biologists, doctors and lawyers.”
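The little problem Knuth mentions is exactly the kind of two-unknown setup that algebra formalizes. Written out, the method the bright students skip is:

```latex
x + y = 100, \qquad x - y = 20
\;\Longrightarrow\; 2x = 120
\;\Longrightarrow\; x = 60, \quad y = 40.
```

Adding the two equations eliminates $y$ mechanically; the students Knuth describes jump straight to “60 and 40” without ever learning the elimination step that harder problems require.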

What graduate students should do when they have expertise in a certain area. “When they [the students] reach this point [expertise in a certain area] I always tell them that now they have a responsibility to the rest of us. Namely, after they have solved their thesis problem and trained their brain for this problem area, they should look around for other, similar problems that require the same expertise. They should use their expertise now, while they have this unique ability, because they’re going to lose it in a month. I emphasize that they shouldn’t be satisfied with solving only one problem; they should also be thinking about other interesting problems that could be handled with the same methods.”

On the importance of anthropomorphizing a problem. “Another aspect of role playing is considerably more important: We can often make advances by anthropomorphizing a problem, by saying that certain of its aspects are “bad guys” and others are “good guys,” or that parts of a system are “talking to each other.” This approach is helpful because our language has lots of words for human relationships, so we can bring more machinery to bear on what we’re thinking about.”

Why putting the discovery of a solution on paper is important. “Well, I have no sympathy with people who never write up an answer; it’s selfish to keep beautiful discoveries a secret. But I can understand a reluctance to write something up when another problem has already grabbed your attention. I used to have three or four papers always in sort of a pipeline, waiting for their ideas to mature before I would finally prepare them for publication.

Frances Yao once described the situation very nicely. She said, you work very hard on a problem for a long time, and then you get this rush, this wonderful satisfaction when you’ve solved it. That lasts about an hour. And then you think of another problem, and you’re consumed with curiosity about the answer to that new one. Again, your life isn’t happy until you find the next answer.”

The philosophy behind seeking solutions. “The process of seeking solutions is certainly a big part of a researcher’s life, but really it’s in everybody’s life. I don’t want to get deep into philosophy, but the book of Ecclesiastes in the Bible says essentially this:

Life is hard and then you die. You can, however, enjoy the process of living; don't worry about the fact that you're going to die. Some bad people have a good life, and some good people have a bad life, and that doesn't seem fair; but don't worry about that either. Just think about ways of enjoying the journey.

Again I’m oversimplifying, but that’s the message I find in many parts of the Bible. For example, it turns up in Philippians 3:16, where the writer says that:

You don't race to get to the goal; the process of racing itself, of keeping the pace, is the real goal.

When I go on vacation, I like to enjoy the drive.

In Christian churches I am least impressed by a sermon that talks about how marvelous heaven is going to be at the end. To me that’s not the message of Christianity. The message is about how to live now, not that we should live in some particular way because there’s going to be pie in the sky some day. The end means almost nothing to me. I am glad it’s there, but I don’t see it as much of a motivating force, if any. I mean, it’s the journey that’s important.”

Knuth’s process of reading papers. “It turns out that I read everything at the same slow rate, whether I’m looking at light fiction or at highly technical papers. When I browse through a journal, the titles and abstracts of papers usually don’t help me much, because they emphasize results rather than methods; therefore I generally go through page by page, looking at the illustrations, also looking for equations that are somehow familiar or for indications of useful techniques that are unfamiliar.

Usually a paper lies outside the scope of my books, because I’ve promised to write about only a rather small part of the entire field of computer science. In such cases there’s nothing new for me to worry about, and I happily turn the pages, zipping to the end. But when I do find a potentially relevant paper, I generally read it only partway, only until I know where it fits into the table of contents of The Art of Computer Programming. Then I make myself a note, to read it later when I’m writing up that section. Sometimes, however—as happened last night with that paper about scheduling games of bridge—I get hooked on some question and try to explore it before I’m ready to move on to reading any other papers.

Eventually when I do begin to write a section of my book, I go into “batch mode” and read all of the literature for which my files point to that section, as well as all of the papers that those papers cite. I save considerable time by reading several dozen papers on the same topic all in the same week, rather than reading them one by one as they come out and trying to keep infinitely many things in my head all at once.

When I finally do get into batch mode, I go very carefully through the first two or three papers, trying to work the concepts out in my own mind and to anticipate what the authors are going to say before turning each page. I usually fail to guess what the next page holds, but the fact that I’ve tried and failed makes me more ready to understand why the authors chose the paths that they did. Frequently I’ll also write little computer programs at this point, so that the ideas solidify in my head. Then, once I’ve gone slowly through the first few papers that I’ve accumulated about some topic, I can usually breeze through the others at a comparatively high speed. It’s like the process of starting with baby steps and progressing to giant steps that I described earlier.”

On parts of research that are much less fun. “Well, some parts of a job are always much less fun than others. But I’ve learned to grin and bear it, to bite the bullet and move on, to face the music, to take it in stride and make a virtue of necessity. (Excuse me for using so many clichés, but the number of different popular expressions tends to make my point.)”

On scheduling daily activities. “I schedule my activities in a somewhat peculiar way. Every day I look at the things that I’m ready to do, and choose the one that I like the least, the one that’s least fun — the task that I would most like to procrastinate from doing, but for which I have no good reason for procrastination. This scheduling rule is paradoxical because you might think that I’m never enjoying my work at all; but precisely the opposite is the case, because I like to finish a project. It feels good to know that I’ve gotten through the hurdles.”
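As a toy illustration of this rule (the task list and “fun” scores below are invented), Knuth's scheduling heuristic amounts to selecting, from the tasks one is actually ready to do, the one with the lowest enjoyment score:

```python
def next_task(tasks):
    """Knuth-style scheduling: among the tasks that are ready,
    pick the one you'd most like to put off (lowest fun score)."""
    ready = [t for t in tasks if t["ready"]]
    if not ready:
        return None
    return min(ready, key=lambda t: t["fun"])

# Hypothetical to-do list; "fun" is a 1-10 enjoyment rating.
todo = [
    {"name": "answer referee report", "fun": 1, "ready": True},
    {"name": "polish conference talk", "fun": 7, "ready": True},
    {"name": "read new paper",         "fun": 9, "ready": False},  # blocked, not ready
]

print(next_task(todo)["name"])  # -> "answer referee report"
```

The filter on readiness matters: the rule is not “do the worst task imaginable” but “of the tasks with no good reason for procrastination, do the least fun one first.”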


On pursuing a PhD. “A PhD is awarded for research, meaning that the student has contributed to the state of the world’s knowledge. That’s quite different from a bachelor’s degree or a master’s degree; those degrees are awarded for a mastery of existing knowledge. (In some non-science fields, like Art, a master’s degree is more akin to a PhD; but I’m speaking now about the situation in mathematics and in the sciences.) My point is that it’s a mistake to think of a PhD as a sort of next step after a BS or MS degree, like advancing further in some academic straight line. A PhD diploma is another animal entirely; it stands for a quite different kind of talent, which is orthogonal to one’s ability to ace an examination. A lot of people who are extremely bright, with straight A+ grades as undergraduates, never get a PhD. They’re smart in a way that’s different from “research smart.” I think of my parents, for example: I don’t believe either one of them would have been a good PhD candidate, although both were extremely intelligent.

It’s extremely misleading to rank people on an IQ scale with the idea that the smarter they are, the more suitable they are for a PhD degree; that’s not it at all. People have talents in different dimensions, and a talent for research might even have a negative correlation with the ability to tie your own shoes.”

Whether volunteering helps Knuth with his principal vocation. “Well, you’re absolutely right. I can’t do technical stuff all the time. I’ve found that I can write only a certain number of pages a day before running out of steam. When I reach this maximum number, I have no more ideas that day. So certainly within a 24-hour period, not all of it is going to be equally creative. Working in the garden, pulling weeds and so on, is a good respite. I recently got together with some friends at Second Harvest, repackaging food from one place to another. This kind of activity, using my hands, provides variety and doesn’t really take away from the things I can do for the world.”

On unhappiness. “I mean, if you didn’t worry, and if you didn’t go through some spells and crises, then you’d be missing a part of life. Even though such things aren’t pleasant when you’re doing them, they are the defining experiences — things to be glad about in retrospect because they happened. Otherwise you might be guilty of not feeling guilty!

On the other hand I’ve noticed in myself that there were times when my body was telling me to be unhappy, yet I sometimes couldn’t readily figure out a reason for any unhappiness. I knew that I was feeling “down,” but sometimes I had to go back several months to recall anything that anybody had said to me that might still be making me feel bad. One day, when I realized how hard it was to find any reason for my current unhappiness, I thought, “Wait a minute. I bet this unhappiness is really something chemical, not actually caused by circumstances.” I began to speculate that my body was programmed to be unhappy a certain percentage of the time, and that hormones or something were the real reason behind moments of mild depression.”

Why power corrupts. “When people have more power and they get richer, and they find themselves rich but still unhappy, they think, “Hmmm, I’ll be happy if I only get rid of all the sources of my unhappiness.” But the action of removing annoyances sometimes involves abusing their power. I could go on and on in this vein, I guess, because you find that in the countries where there is a great difference between rich and poor, the rich people have their problems, too. They haven’t any motivation to change the way they’re living, exploiting others, because as far as they can see, their own life isn’t that happy. But if they would only realize that their unhappy spells are part of the way that they’re made, and basically normal, they wouldn’t make the mistake of blaming somebody else and trying to get even for imagined misdeeds.”

Point eight is enough. “In fact I’ve concluded that it’s really a good thing for people not to be 100% happy. I’ve started to live in accordance with a philosophy that can be summed up in the phrase “Point eight is enough,” meaning “0.8 is enough.”

You might remember the TV show from the 70s called “Eight is Enough,” about a family with eight children. That’s the source of my new motto. I don’t know that 0.8 is the right number, but I do believe that when I’m not feeling 100% happy, I shouldn’t feel guilty or angry, or think that anything unusual is occurring. I shouldn’t set 100% as the norm, without which there must be something wrong. Instead, I might just as well wait a little while, and I’ll feel better. I won’t make any important decisions about my life at a time when I’m feeling less than normally good.

In a sense I tend now to suspect that it was necessary to leave the Garden of Eden. Imagine a world where people are in a state of euphoria all the time — being high on heroin, say. They’d have no incentive to do anything. What would get done? What would happen? The whole world would soon collapse. It seems like intelligent design when everybody’s set point is somewhere less than 100%.”

High minimum more important than high maximum. “I try to do a good job at whatever I’m doing, because it’s more fun to do a good job than not. And when there’s a choice between different things to spend time on, I try to look for things that will maximize the benefit without making me burn out.

For example, when I was working on the TeX project during the early 80s, hardly anybody saw me when I was sweeping the floor, mopping up the messes and carrying buckets of waste from the darkroom, cleaning the machines, and doing other such stuff. I did those things because I wouldn’t have dared to ask graduate students to do menial tasks that were beneath them.

I know that every large project has some things that are much less fun than others; so I can get through the tedium, the sweeping or whatever else needs to be done. I just do it and get it over with, instead of wasting time figuring out how not to do it. I learned that from my parents. My mother is amazing to watch because she doesn’t do anything efficiently, really: She puts about three times as much energy as necessary into everything she does. But she never spends any time wondering what to do next or how to optimize anything; she just keeps working. Her strategy, slightly simplified, is, “See something that needs to be done and do it.” All day long. And at the end of the day, she’s accomplished a huge amount.

Putting this another way, I think that the limiting thing — the thing that determines a person’s success in life — is not so much what they do best, but what they do worst. I mean, if you rate every aspect of what someone does, considering everything that goes into a task, a high minimum is much more important than a high maximum. The TeX project was successful in large part because I quietly did things like mop the floor. The secret of any success that I’ve had, similarly, is that in all the projects I’ve worked on, the weakest link in my chain of abilities was still reasonably strong.”

A person’s success in life is determined by having a high minimum, not a high maximum. If you can do something really well but there are other things at which you’re failing, the latter will hold you back. But if almost everything you do is up there, then you’ve got a good life. And so I try to learn how to get through things that others find unpleasant.

A guiding heuristic. “Don’t just do trendy stuff. If something is really popular, I tend to think: back off. I tell myself and my students to go with your own aesthetics, what you think is important. Don’t do what you think other people think you want to do, but what you really want to do yourself. That’s been a guiding heuristic for me all the way through.”

Source of humility. “I wrote a couple of books, including Things a Computer Scientist Rarely Talks About, that are about theology — things you can’t prove — rather than mathematics or computer science. My life would not be complete if it was all about cut and dried things. The mystical things I don’t understand give me humility. There are things beyond my understanding.

In mathematics, I know when a theorem is correct. I like that. But I wouldn’t have much of a life if everything were doable. This knowledge doesn’t tear me apart. Rather, it ensures I don’t get stuck in a rut.”

Meaning of life. “I personally believe that God exists, although I have no idea what that means. I believe there is something beyond human capabilities; it might be some AI. Whatever it is, I do believe there is something that goes beyond human understanding, and that I can try to learn how to resonate with whatever that being would like me to do. I strive for occasional glimpses of it, not that I ever think I’m going to get close. I try to imagine that I am following somebody’s wishes, and that this AI, or whatever it is, is smart enough to give me clues.”

Picked topics
Joris Luyendijk on Liefdeswetten
bvermeulen bvermeulen - posts: 146
Updated 4 years, 2 months ago by bvermeulen

In the second episode, Nadia Benaissa talks with Joris Luyendijk. They look back on the writing process and on how their books relate to society.

Picked topics
Dijkstra quotes
bvermeulen bvermeulen - posts: 146
Updated 4 years, 2 months ago by bvermeulen

Dijkstra often expressed his opinions using memorable turns of phrase or maxims that caught the ears of his colleagues and were widely commented upon. Here are some examples:

  • Program testing can be used to show the presence of bugs, but never to show their absence.
  • Computer science is no more about computers than astronomy is about telescopes.
  • The question of whether machines can think is about as relevant as the question of whether submarines can swim.
  • A formula is worth a thousand pictures.

In one of his EWDs, Dijkstra collected several jibes about programming languages, such as: “The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense.” At the time, COBOL was one of the most widely used programming languages, and these comments were not warmly received.

Picked topics
Belgian Congo
bvermeulen bvermeulen - posts: 146
Updated 4 years, 2 months ago by bvermeulen

Congo, some Belgian cruelty …
King Leopold

Picked topics
Politics
bvermeulen bvermeulen - posts: 146
Updated 4 years, 7 months ago by bvermeulen

British Writer Pens The Best Description Of Trump I’ve Read

“Why do some British people not like Donald Trump?” Nate White, an articulate and witty writer from England, wrote the following response:

A few things spring to mind. Trump lacks certain qualities which the British traditionally esteem. For instance, he has no class, no charm, no coolness, no credibility, no compassion, no wit, no warmth, no wisdom, no subtlety, no sensitivity, no self-awareness, no humility, no honour and no grace – all qualities, funnily enough, with which his predecessor Mr. Obama was generously blessed. So for us, the stark contrast does rather throw Trump’s limitations into embarrassingly sharp relief.

Plus, we like a laugh. And while Trump may be laughable, he has never once said anything wry, witty or even faintly amusing – not once, ever. I don’t say that rhetorically, I mean it quite literally: not once, not ever. And that fact is particularly disturbing to the British sensibility – for us, to lack humour is almost inhuman. But with Trump, it’s a fact. He doesn’t even seem to understand what a joke is – his idea of a joke is a crass comment, an illiterate insult, a casual act of cruelty.

Trump is a troll. And like all trolls, he is never funny and he never laughs; he only crows or jeers. And scarily, he doesn’t just talk in crude, witless insults – he actually thinks in them. His mind is a simple bot-like algorithm of petty prejudices and knee-jerk nastiness.

There is never any under-layer of irony, complexity, nuance or depth. It’s all surface. Some Americans might see this as refreshingly upfront. Well, we don’t. We see it as having no inner world, no soul. And in Britain we traditionally side with David, not Goliath. All our heroes are plucky underdogs: Robin Hood, Dick Whittington, Oliver Twist. Trump is neither plucky, nor an underdog. He is the exact opposite of that. He’s not even a spoiled rich-boy, or a greedy fat-cat. He’s more a fat white slug. A Jabba the Hutt of privilege.

And worse, he is that most unforgivable of all things to the British: a bully. That is, except when he is among bullies; then he suddenly transforms into a snivelling sidekick instead. There are unspoken rules to this stuff – the Queensberry rules of basic decency – and he breaks them all. He punches downwards – which a gentleman should, would, could never do – and every blow he aims is below the belt. He particularly likes to kick the vulnerable or voiceless – and he kicks them when they are down.

So the fact that a significant minority – perhaps a third – of Americans look at what he does, listen to what he says, and then think ‘Yeah, he seems like my kind of guy’ is a matter of some confusion and no little distress to British people, given that:

  • Americans are supposed to be nicer than us, and mostly are.

  • You don’t need a particularly keen eye for detail to spot a few flaws in the man.

This last point is what especially confuses and dismays British people, and many other people too; his faults seem pretty bloody hard to miss. After all, it’s impossible to read a single tweet, or hear him speak a sentence or two, without staring deep into the abyss. He turns being artless into an art form; he is a Picasso of pettiness; a Shakespeare of shit. His faults are fractal: even his flaws have flaws, and so on ad infinitum. God knows there have always been stupid people in the world, and plenty of nasty people too. But rarely has stupidity been so nasty, or nastiness so stupid. He makes Nixon look trustworthy and George W look smart. In fact, if Frankenstein decided to make a monster assembled entirely from human flaws – he would make a Trump.

And a remorseful Doctor Frankenstein would clutch out big clumpfuls of hair and scream in anguish: ‘My God… what… have… I… created?’ If being a twat was a TV show, Trump would be the boxed set.

https://coming42.livejournal.com/479179.html

Picked topics
Pricing Your Product (cont'd)
bvermeulen bvermeulen - posts: 146
Updated 4 years, 8 months ago by bvermeulen

https://www.sequoiacap.com/article/pricing-your-product/

Decoy pricing
The Economist magazine once offered three subscription packages: an online one for $59; a print one for $125; and a combined print and online subscription also for $125.

The ad caught the eye of a professor, who asked 100 of his students which subscription they would choose. Eighty-four chose the combo and 16 chose the online only. No one chose the print only subscription.

But when the print-only option was eliminated and students were just given a choice between the $59 online subscription and the $125 combined one, 68 chose the cheaper option.

The print-only subscription doesn’t have a lot of value as a package. But it influences the way customers make snap judgments.

These “decoy” packages make other—often more expensive—ones look good by providing a clearly inferior choice. There’s no obvious way to determine whether the online subscription or print-and-online combination is a better value. But compared with the print-only one, the combo is clearly a better deal. The reference point makes people more inclined to pick it.
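The revenue swing implied by those figures is easy to verify. A quick sketch, using the reported choices per 100 students:

```python
# Revenue per 100 students in the Economist subscription experiment,
# using the choice counts reported above.
ONLINE, COMBO = 59, 125  # subscription prices in dollars

def revenue(choices):
    """choices: dict mapping price -> number of students choosing it."""
    return sum(price * n for price, n in choices.items())

with_decoy = revenue({COMBO: 84, ONLINE: 16})     # print-only decoy shown
without_decoy = revenue({COMBO: 32, ONLINE: 68})  # decoy removed

print(with_decoy)     # 11444
print(without_decoy)  # 8012
```

Per 100 students, the decoy is worth about $3,400 in extra revenue, even though nobody actually buys it.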

Similarly, a company may use a decoy to make an expensive product look affordable. A common tactic is enterprise software that costs, say, $500 a month for up to 10 users, $1,000 a month for up to 25 users but just $1,200 for unlimited users.
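The per-seat arithmetic behind those hypothetical tiers shows how the decoy works: the middle tier exists mainly to make the unlimited plan look like the bargain. A minimal sketch:

```python
# Hypothetical enterprise tiers from the example above: (monthly price, user cap).
TIERS = [(500, 10), (1000, 25), (1200, float("inf"))]

def per_seat(price, users):
    """Effective monthly cost per user at a given headcount."""
    return price / users

def cheapest_tier(team_size):
    """Lowest monthly price among tiers that can fit the whole team."""
    return min(price for price, cap in TIERS if team_size <= cap)

print(per_seat(500, 10))   # 50.0 per user on the small plan
print(per_seat(1000, 25))  # 40.0 per user on the decoy plan
print(cheapest_tier(26))   # 1200: one user past 25 and "unlimited" is the only fit
```

At 26 users the unlimited plan already beats the decoy's $40 per seat, and its per-seat cost keeps falling as the team grows.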

Tip: People tend to overvalue things they already have, a pattern known as the endowment effect. This is something that enterprise companies should be particularly aware of. It’s going to take an extra effort to get a customer to rip out something they already have even if what you’re selling is demonstrably better. That’s one reason why it’s easier to sell to a greenfield customer than to win one away from a competitor.

Developing your pricing hypothesis
The following worksheet can help you assess your product’s perceived value and the accuracy of its price.

In field (1) write down the things that people will think about when they first encounter your product. Use the left side for things someone might use in place of your product and the right side for things they’d likely use along with it.

In field (2) write down the intuitive snap judgments someone will make about your product and what they’ll conclude after a more rigorous analysis.

Field (3) visualizes your product’s perceived value, which should be heavily influenced by substitutes and complements. If your product replaces something that costs $200, its perceived value likely won’t be any higher than that.

Field (4) will help you identify how broad a market you are targeting.
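The four fields can also be kept as a simple data structure, so the hypothesis is explicit and easy to revisit as evidence comes in. A minimal sketch; every entry below is an invented example, not part of the worksheet itself:

```python
# One possible encoding of the worksheet fields (all values are hypothetical).
pricing_hypothesis = {
    "substitutes": ["spreadsheet templates"],         # field 1, left side
    "complements": ["cloud storage"],                 # field 1, right side
    "snap_judgment": "looks like a $10/month tool",   # field 2, intuitive read
    "considered_judgment": "saves ~2 hours a week",   # field 2, after analysis
    "perceived_value_ceiling": 200,                   # field 3, capped by substitutes
    "market_breadth": "small-business finance teams", # field 4
}

# The candidate price should sit below the perceived-value ceiling.
candidate_price = 120
assert candidate_price < pricing_hypothesis["perceived_value_ceiling"]
```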

As you proceed with your pricing strategy, routinely remind yourself that your customers are analytical, but prone to leaps of logic; that they want bargains, but often base them on arbitrary reference points. Above all, they don’t want to feel like they’re on the hook if they make the wrong choices.

If you can manage these desires while providing a product that customers are eager to pay for, you’ll be on your way to building an enduring business.

Pricing worksheet

Picked topics
Pricing Your Product
bvermeulen bvermeulen - posts: 146
Updated 4 years, 8 months ago by bvermeulen

https://www.sequoiacap.com/article/pricing-your-product/

Pricing Your Product

A lot of startups treat pricing as a math problem or, worse, an afterthought. Pricing is as much an art as it is a science, one that relies as much on marketing and psychology as it does on classical economics.

This Sequoia Guide covers strategies that can help you figure out the right price for your product—and end up with happier customers and more profit in the process.

The Sequoia Guide to Pricing
LinkedIn’s decision to package some seldom-used features as high-margin “premium” accounts spawned a business line that now makes almost $250 million a year. At eBay, touting the benefits of a low-cost tool meant the difference between profitability and a loss.

Meanwhile, companies that didn’t properly assess the value of their products and price them accordingly struggled or fizzled out.

Setting a price for a product is one of the most important decisions a company can make. But all too often it’s treated as an afterthought. Startups in particular have a habit of setting their price low to attract customers and never raising it, or keeping a feature free long after it’s clear people will pay.

“If you picked your price once and never changed it, it’s probably wrong,” says Phil Libin, chief executive of Evernote.

A more thoughtful approach to pricing can boost your company’s profits, increase customer satisfaction and help you discover popular product variations that you hadn’t considered.

Getting started
In theory, setting a price should be a rational economics problem. You have a set supply of a product and there’s a certain level of demand for it in the marketplace. Since demand tends to increase as prices go down, you simply adjust your price until you’ve maximized profits.

Reality is more complicated. Technology companies usually don’t have a finite supply of a product. And while you may spend a lot to develop software or a mobile service, over time the cost to produce additional units approaches zero.

Furthermore, many startups have a new product for which there aren’t competitors for customers to benchmark against.

Under these circumstances, “the traditional model starts to behave in weird ways,” says Michael Dearing, a professor at Stanford University’s design school, who ran pricing at eBay for many years.

In order to set a price, you’ll need to form a hypothesis. You can A/B test it and use other analytics to refine it. But don’t rely on data alone to inform your decisions. Also take into account input from your customers and employees, what the competition is doing and your intuition.
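One way to read such an A/B test is a two-proportion z-score on conversion, alongside revenue per visitor; all figures below are invented for illustration:

```python
import math

# Hypothetical price test: the same product shown at two price points.
price_a, buyers_a, visitors_a = 5, 120, 2000    # $5/month variant
price_b, buyers_b, visitors_b = 10, 80, 2000    # $10/month variant

# Conversion rates and revenue per visitor for each variant.
rate_a, rate_b = buyers_a / visitors_a, buyers_b / visitors_b
rev_a, rev_b = price_a * rate_a, price_b * rate_b

# Two-proportion z-score: how surprising is the conversion difference?
p = (buyers_a + buyers_b) / (visitors_a + visitors_b)           # pooled rate
se = math.sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
z = (rate_a - rate_b) / se

print(f"conversion: {rate_a:.1%} vs {rate_b:.1%} (z = {z:.2f})")
print(f"revenue per visitor: ${rev_a:.2f} vs ${rev_b:.2f}")
```

In this made-up example the cheaper price converts significantly better, yet the $10 variant earns more per visitor, which is exactly why conversion rate alone is the wrong metric for a pricing test.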

“Pricing is not a math problem,” says Dearing. “It’s a judgment problem.”

Increase perceived value
Usually, companies fixate on the gap between how much their products cost to make and how much they charge for them. But you should also focus on the gap between your price and how much value customers think it delivers, a concept known as perceived value.

Companies often assume that if sales are slow they need to cut prices. But more often, Dearing says, “If nobody’s buying my product, it’s because the gap between price and perceived value either doesn’t exist or it’s not large enough.”

Evernote is trying to measure that gap. The company’s Premium accounts currently cost $5 a month. Libin recently started testing Evernote’s price in some countries to find out whether that’s cheap or expensive relative to perceived value.

“It’s possible that in some countries, like India or China, $5 a month is too expensive,” Libin says. “For the U.S. or Japan it may be that $10 a month is still cheap.”

You can increase perceived value with better marketing. EBay, for instance, offered a feature from its inception that for 25 cents allowed people who sell products on the site to add a photo next to their listings. It wasn’t used much, Dearing says.

But it turned out that sellers who included the pictures had much higher click rates and tended to command a higher price for their goods. EBay started to market this data along with the feature.

With the benefit of the sales data, eBay’s sellers saw that the pictures helped solve a problem and their perceived value skyrocketed.

Because it didn’t cost eBay 25 cents to host a photograph, the feature, along with other optional upgrades, eventually generated hundreds of millions a year in pure profits, Dearing says.

Let your price tell a story
The price you set for a product also influences its perceived value. That’s why people assume that a $50 bottle of wine is better than a $10 one.

In that sense, price can serve as a proxy for quality.

Natera recently brought to market a non-invasive pre-natal test that can detect Down syndrome and other conditions in a mother’s blood. Previously, testing for these conditions required a risky procedure that extracted tissue from the fetus. Other non-invasive tests aren’t as comprehensive.

Because Natera’s test is better than its competitors’ products, the company charges more.

“Premium pricing communicates a premium product,” says Matthew Rabinowitz, the company’s CEO.

Tip: Where in the shopping process you display your price can make a big difference. In some cases, such as a takeout menu, waiting until after a customer has already decided to buy your product may allow you to charge more. In others, like a hotel room, too much opaqueness can frustrate customers.

One way to expand your customer base is to offer multiple products at similar price points, catering to a range of tastes. This is known as horizontal assortment. The iPhone 5c, which comes in five colors, is a good example.

Another approach is vertical assortment, offering versions at multiple price points. While your most expensive model represents what your brand aspires to, customers will value features differently and some who don’t see the value in that high-end version might be willing to pay less for a stripped-down model.

Charging different prices for iPhones with different amounts of storage increases the addressable market for the product with minimal additional cost. Software bundles that come with a maximum number of users or different tiers of customer service accomplish the same thing.

No matter how much research you do, you’ll never know for sure what customers want. In addition to offering a variety of product levels, it’s good to allow them to add features a la carte.

By letting customers create their own packages you get real-time feedback about price and product configurations.

Tip: Too much choice can be overwhelming. People would rather buy nothing than choose the wrong option. Similarly, variable pricing that slides upwards with more usage can scare potential customers. They’ll often walk away if they can’t easily figure out the right product to buy or if they’re forced to make projections about future costs.