Showing posts with label prediction.

Wednesday, August 28, 2024

In which I declare my expert judgment on AI 2024

These days my social media experience is largely Mastodon. There's something to be said for a social network that's so irreparably geeky and so hard to work with that only a tiny slice of humanity can possibly participate (unless and until Threads integration actually works).

In my Mastodon corner of the "Fediverse", among the elite pundits I choose to read, there's a vocal cohort that is firm in their conviction that "AI" hype is truly and entirely hype, and that the very term "AI" should not be used. That group would say that the main application of LLM technology is criming.

Based on my casual polling of my pundits, there's a quieter cohort that is less confident. That group is anxious, but not only about the criming.

Somewhere, I am told, there is a third group that believes that godlike-AIs are coming in 2025. They may be mostly on Musk's network.

Over the past few months I think the discourse has shifted. The skeptics are less confident, and the godlike-AI cohort is likewise quieter as LLM-based AI hits technical limits.

The shifting discourse, and especially the apparent LLM technical limitations, mean I'm back to being in the murky middle of things. Where I usually sit. Somehow that compels me to write down what I think. Not because anyone will or should care [1], but because I write these posts mostly for myself and I like to look back and see how wrong I've been.

So, in Aug 2024, I think:
  1. I am less worried that the end of the world is around the corner. If we'd gotten one more qualitative advance in LLM or some other AI tech I'd be researching places to (hopelessly) run to.
  2. Every day I think of new things I would do if current LLM tech had access to my data and to common net services. These things don't require any fundamental advances, but they do require ongoing iteration. I don't have much confidence in Apple's capabilities any more, but maybe they can squeeze this out. I really, really don't want to have to depend on Microsoft. Much less Google.
  3. Perplexity.ai is super valuable to me now and I'd pay up if they stopped giving it away. It's an order of magnitude better than Google search.
  4. The opportunities for crime are indeed immense. They may be part of what ends unmediated net access for most people. By far the best description of this world is a relatively minor subplot in Neal Stephenson's otherwise mixed 2019 novel "Fall".
  5. We seem to be replaying the 1995-2000 dot com boom and crash, but faster and incrementally. That was a formative time in my life. It was a time when all the net hype was shown to be ... correct. Even as many lost their assets buying the losers.
  6. It will all be immensely stressful, disruptive, and anxiety-inducing even though we won't be doing godlike-AI for at least (phew) five more years.
  7. Many who are skeptical about the impact of our current technologies have a good understanding of LLM tech but a weak understanding of cognitive science. Humans are not as magical as they think.
- fn -

[1] I legitimately have deeper expertise here than most would imagine but it's ancient and esoteric.

Monday, February 20, 2023

Be afraid of ChatGPT

TL;DR: It's not that ChatGPT is miraculous, it's that cognitive science research suggests human cognition is also not miraculous.

"Those early airplanes were nothing compared to our pigeon-powered flight technology!"

https://chat.openai.com/chat - "Write a funny but profound sentence about what pigeons thought of early airplanes"

Relax: ChatGPT is just a fancy autocomplete.
Be afraid: Much of human language generation may be a fancy autocomplete.

Relax: ChatGPT confabulates.
Be afraid: Humans with cognitive disabilities routinely confabulate, and under enough stress most humans will confabulate.

Relax: ChatGPT can't do arithmetic.
Be afraid: If a monitoring system can detect that a question involves arithmetic or mathematics, it can invoke a math system*.


UPDATE: 2 hours after writing this I read that this has been done.
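To make the monitoring-system idea concrete, here's a minimal sketch of such a router. Everything in it is my own invention: llm_complete is a hypothetical stand-in for an LLM call, and eval stands in for a real math engine.

```python
import re

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a large language model call."""
    return f"(LLM answer to: {prompt})"

def route(question: str) -> str:
    """If the question is plain arithmetic, delegate to a deterministic
    math engine; otherwise fall back to the language model."""
    stripped = question.strip().rstrip("?= ")
    if re.fullmatch(r"[\d\s.+\-*/()]+", stripped):
        # Toy math system; a real one would use a CAS or calculator service.
        return str(eval(stripped))
    return llm_complete(question)

print(route("12 * (3 + 4)"))                          # handled by the math engine
print(route("What did pigeons think of airplanes?"))  # handled by the LLM
```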

Relax: ChatGPT's knowledge base is faulty.
Be afraid: ChatGPT's knowledge base is vastly larger than that of most humans, and it will quickly improve.

Relax: ChatGPT doesn't have explicit goals other than a design goal to emulate human interaction.
Be afraid: Other goals can be implemented.

Relax: We don't know how to emulate the integration layer humans use to coordinate input from disparate neural networks and negotiate conflicts.

* I don't know the status of such an integration layer. It may already have been built. If not, it may take years or decades -- but probably not many decades.

Relax: We can't even get AI to drive a car, so we shouldn't worry about this.
Be afraid: It's likely that driving a car basically requires near-human cognitive abilities. The car test isn't reassuring.

Relax: ChatGPT isn't conscious.
Be afraid: Are you conscious? Tell me what consciousness is.

Relax: ChatGPT doesn't have a soul.
Be afraid: Show me your soul.

Relax - I'm bad at predictions. In 1945 I would have said it was impossible, barring celestial intervention, for humanity to go 75 years without nuclear war.


See also:

  • All posts tagged as skynet
  • Scott Aaronson and the case against strong AI (2008). At that time Aaronson felt a sentient AI was sometime after 2100. Fifteen years later (Jan 2023) Scott is working for OpenAI (ChatGPT). Emphases mine: "I’m now working at one of the world’s leading AI companies ... that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie ... that AI will gain many of the remaining abilities in years rather than decades, and ... my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world."
  • Imagining the Singularity - in 1965 (2009 post). Mathematician I.J. Good warned of an "intelligence explosion" in 1965. "Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British statistician who worked as a cryptologist at Bletchley Park."
  • The Thoughtful Slime Mold (2008). We don't fly like birds fly.
  • Fermi Paradox resolutions (2000)
  • Against superhuman AI: in 2019 I felt reassured.
  • Mass disability (2012) - what happens as more work is done best by non-humans. This post mentions Clark Goble, an app.net conservative I miss quite often. He died young.
  • Phishing with the post-Turing avatar (2010). I was thinking 2050 but now 2025 is more likely.
  • Rat brain flies plane (2004). I've often wondered what happened to that work.
  • Cat brain simulator (2009). "I used to say that the day we had a computer roughly as smart as a hamster would be a good day to take the family on the holiday you've always dreamed of."
  • Slouching towards Skynet (2007). Theories on the evolution of cognition often involve aspects of deception including detection and deceit.
  • IEEE Singularity Issue (2008). Widespread mockery of the Singularity idea followed.
  • Bill Joy - Why the Future Doesn't Need Us (2000). See also Wikipedia summary. I'd love to see him revisit this essay but, again, he was widely mocked.
  • Google AI in 2030? (2007) A 2007 prediction by Peter Norvig that we'd have strong AI around 2030. That ... is looking possible.
  • Google's IQ boost (2009) Not directly related to this topic but reassurance that I'm bad at prediction. Google went to shit after 2009.
  • Skynet cometh (2009). Humor.
  • Personal note - in 1979 or so John Hopfield excitedly described his work in neural networks to me. My memory is poor but I think we were outdoors at the Caltech campus. I have no recollection of why we were speaking, maybe I'd attended a talk of his. A few weeks later I incorporated his explanations into a Caltech class I taught to local high school students on Saturday mornings. Hopfield would be about 90 if he's still alive. If he's avoided dementia it would be interesting to ask him what he thinks.

Sunday, July 24, 2022

Putting down a marker on post-COVID encephalopathy (PCE)

I generally have opinions on things even in the absence of science or data. They are often wrong. Even so, for my own future amusement, here's my take on fatigue/cognitive symptoms persisting months after a COVID infection:
  • I think direct post-viral fatigue, including post-COVID fatigue, is in the head. Specifically, in brain tissues. Something along the lines of an encephalitis or MS -- encephalopathy is probably the best term. A persistent inflammatory condition related to immune dysfunction or persistent infection by something (like reactivated latent viruses, COVID, etc).
  • It's very hard to separate post-viral neuronal dysfunction from anxiety, depression, ongoing dementing processes, coincident head injuries, coincident brain disorders, sleep disorders and the like. It's all in the brain after all. (These aren't exclusive conditions, so some unlucky person must get all of them at once. Heck, for all we know depression is partly a post-viral damage disorder.) We need better tech -- maybe a combination of anatomic and functional brain imaging will help one day. Maybe it will be something we can diagnose with MRI and lumbar puncture/CSF samples.
  • I think one day we'll find post-COVID encephalopathy (PCE I'll call it) occurs in less than 1 in 500 ever-infected people, and that in most it improves over 3 months to 1 year. In most -- but we now believe MS is an infrequent or rare sequela of Epstein-Barr infection, so we have to worry that some PCE is not going to get better unless we come up with new treatments.
  • There are almost certainly other viruses that cause similar conditions (post-viral encephalopathy). Maybe non-COVID coronavirus URIs aren't as benign as we thought.
Maybe in 2030 I'll come across this and update with how it turned out.

Friday, December 06, 2019

The killer application for Apple's AR glasses will be driving

Sucks to get old. At 60 my night vision is probably half of what it was at 25. I drive slowly at night to reduce the risk of missing a pedestrian.

What I need are AR glasses that receive input from forward-facing light-sensitive sensors and enhance what I see as I drive. Draw circles around pedestrians. Turn night into day. With the usual corrective lenses of course.

I’d pay a few thousand for something like that.

Seems quite doable.
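The software half seems within hobbyist reach already. Here's a rough, illustrative sketch of that pipeline using OpenCV -- brighten the feed, detect pedestrians, box them. The camera index and gain values are placeholders; a real product would need far better low-light sensors and detectors.

```python
import cv2  # OpenCV

# Pedestrian detector: OpenCV's stock HOG + linear SVM people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # stand-in for a forward-facing camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # "Turn night into day": crude brightness/contrast boost.
    bright = cv2.convertScaleAbs(frame, alpha=2.5, beta=40)
    # "Draw circles around pedestrians" (rectangles here).
    boxes, _ = hog.detectMultiScale(bright)
    for (x, y, w, h) in boxes:
        cv2.rectangle(bright, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("enhanced", bright)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
```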

Saturday, August 17, 2019

Sorrow for the Long Tail - the memory machine I will never see

There are several software products I want that nobody will build.

For example, I want a “screen saver” that will randomly select from a collection of video and still images and display them across multiple screens.

Pretty much like Apple's annoying [1] screen saver, but for video it would randomly select an xx-second file segment and play it without sound.
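The selection logic is simple enough to sketch; the hard part is packaging it as a real multi-screen screen saver. A minimal sketch of the selection step (the media folder is hypothetical, and video playback assumes the mpv player is installed):

```python
import random
import subprocess
from pathlib import Path

MEDIA_DIR = Path("~/Pictures/Memories").expanduser()  # hypothetical folder
SEGMENT_SECONDS = 30
VIDEO_EXTS = {".mov", ".mp4", ".m4v"}
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".heic"}

def show_random_memory():
    """Pick a random photo or video; for video, play a short muted
    segment starting at a random point in the file."""
    files = [p for p in MEDIA_DIR.rglob("*")
             if p.suffix.lower() in VIDEO_EXTS | IMAGE_EXTS]
    f = random.choice(files)
    if f.suffix.lower() in VIDEO_EXTS:
        start = random.randint(0, 90)  # percent offset into the file
        subprocess.run(["mpv", "--mute=yes", f"--start={start}%",
                        f"--length={SEGMENT_SECONDS}", str(f)])
    else:
        subprocess.run(["open", str(f)])  # macOS default image viewer
```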

I don't think anyone will ever build this. It's too hard to do [2] and there's no money in it. Only a small number of people would pay, say, $20 for this. Maybe 1 in 1,000. After expenses and marketing it would be hard to earn even a few thousand dollars.

Which reminds us of the false promise of The Long Tail. Those were the days that Netflix had a huge catalogue of barely viewed movies [3] that were often very fine. We thought there would be business for the interests of the 0.1%. That didn’t happen.

This is why I’ve given up on trying to predict the future ...

--

[1] Whenever macOS cannot connect to the folder hosted on my NAS it reverts to the default collection. I need to restore my share and I've never been able to find an automated way to do that. On iOS things are much worse. Speaking of products I want, I'd pay $20 for a macOS utility that simply resets my screen saver to my preferred share.

[2] We never thought software development would keep getting harder. We used to think there would be a set of composable tools we could all use (OpenDoc, AppleScript, etc). We expected a much more advanced version of what we had on DOS or Unix in the 80s or the early 90s web. Instead we got AngularJS.

[3] In the mailer days our kids movies were unplayable due to disc damage about half the time. Finally gave up on that.

Saturday, February 02, 2019

Against superhuman AI

I am a strong-AI pessimist. I think by 2100 we’ll be in range of sentient AIs that vastly exceed human cognitive abilities (“skynet”). Superhuman-AI has long been my favorite answer to the Fermi Paradox (see also); an inevitable product of all technological civilizations that ends interest in touring the galaxy.

I periodically read essays claiming superhuman-AI is silly, but the justifications are typically nonsensical or theological (soul-equivalents needed).

So I tried to come up with some valid reasons to be reassured. Here’s my list:

  1. We’ve hit the physical limits of our processing architecture. The “Moore-era” is over — no more doubling every 12-18 months. Now we slowly add cores and tweak hardware. The new MacBook Air isn’t much faster than my 2015 Air. So the raw power driver isn’t there.
  2. Our processing architecture is energy inefficient. Human brains vastly exceed our computing capabilities and they run on a meager supply of glucose and oxygen. Our energy-output curve is wrong.
  3. Autonomous vehicles are stuck. They aren’t even as good as the average human driver, and the average human driver is obviously incompetent. They can’t handle bicycles, pedestrians, weather, or map variations. They could be 20 years away, they could be 100 years away. They aren’t 5 years away. Our algorithms are limited.
  4. Quantum computers aren’t that exciting. They are wonderful physics platforms, but quantum supremacy may be quite narrow.
  5. Remember when organic neural networks were going to be fused into silicon platforms? Obviously that went nowhere since we no longer hear about it. (I checked, it appears Thomas DeMarse is still with us. Apparently.)

My list doesn’t make superhuman-AI impossible of course, it just means we might be a bit further away, closer to 300 years than 80 years. Long enough that my children might escape.

Sunday, December 30, 2018

Why the crisis of 2016 will continue for decades to come

I haven’t written recently about why Crisis 2016, sometimes called Crisis-T, happened. For that matter, why Brexit. My last takes were in 2016 …

  • In defense of Donald Trump - July 2016. In which I identified the cause of the crisis, but assumed we’d dodge the bullet and HRC would tend to the crisis of the white working class.
  • Trumpism: a transition function to the world of mass disability - Aug 2016. “How does a culture transition from memes of independence and southern Christian-capitalist marketarianism to a world where government deeply biases the economy towards low-education employment?"
  • After Trump: reflections on mass disability in a sleepless night - Nov 11, 2016. "Extreme cultural transformation. Demographics. China. The AI era and mass disability. I haven’t even mentioned that pre-AI technologies wiped out traditional media and enabled the growth of Facebook-fueled mass deception alt-media … We should not be surprised that the wheels have come off the train.”
  • Crisis-T: What's special about rural? - Nov 16, 2016: "The globalization and automation that disabled 40% of working age Americans isn't unique to rural areas, but those areas have been ailing for a long time. They've been impacted by automation ever since the railroad killed the Erie canal, and the harvester eliminated most farm workers. Once we thought the Internet would provide a lifeline to rural communities, but instead it made Dhaka as close as Escanaba."

How has my thinking changed two years later? Now I’d add a couple of tweaks, especially the way quirks of America’s constitution amplified the crisis. Today’s breakdown:

  • 65% the collapse of the white non-college "working class" — as best measured by fentanyl deaths and non-college household income over the past 40 years. Driven by globalization and IT, both separately and synergistically, including remonopolization (megacorp). This is going to get worse.
  • 15% the way peculiarities of the American constitution empower rural states and rural regions that are most impacted by the collapse of the white working class due to demographics and out-migration of the educated. This is why the crisis is worse here than in Canada. This will continue.
  • 15% the long fall of patriarchy. This will continue for a time, but eventually it hits the ground. Another 20 years for the US?
  • 5% Rupert Murdoch. Seriously. In the US Fox and the WSJ, but also his media in Australia and the UK. When historians make their list of villains of the 21st century he’ll be on there. He’s broken and dying now, but he’s still scary enough that his name is rarely mentioned by anyone of consequence.
  • 1% Facebook, social media, Putin and the like. This will get better.

That 1% for Facebook et al. is pretty small — but the election of 2016 was on the knife's edge. That 1% was historically important.

Rupert Murdoch will finally die, though his malignant empire will grind on for a time. Patriarchy can’t fall forever, eventually that process is done. We now understand the risks of Facebook and its like and those will be managed. So there’s hope.

But the crisis of the white non-college will continue and our constitution will continue to amplify that bloc’s political power in rural areas. Even if civilization wins in 2020 the crisis of 2016 will continue. It will test human societies for decades to come.

Sunday, August 28, 2016

Trumpism: a transition function to the world of mass disability.

We know the shape of the socioeconomic future for the bottom 40% in the post-globalization, post-AI world of mass disability.

But how do we get there? How does a culture transition from memes of independence and southern Christian-capitalist marketarianism to a world where government deeply biases the economy towards low-education employment?

There needs to be a transition function. A transform that is applied to a culture. With the anthropology perspective I've long sought, Arlie Hochschild makes the case that Trump is, among other things, a transition function that erases Tea Party Marketarianism and embraces the heresy of government support (albeit for the "deserving").

In a complex adaptive system we get the transition function we need rather than the one we want. No guarantee we survive it though.


Thursday, August 25, 2016

What socioeconomic support will look like in 20 years

This is what I think socioeconomic support will look like in 2040 based on cognitive [2] quintiles.

The bottom quintile (0-20%, non-voters) will have supported work environments and direct income subsidies; an improved version of what most wealthy nations do for the 0-5% of adults currently considered cognitively "disabled" [1].

The second quintile (20-40%, Trump base if white) will have subsidized employment (direct or indirect).

The fifth quintile (80-100%) will live much as they do now.

I don't know what happens to the 3rd and 4th quintiles.

- fn -

[1] The US is currently "mainstreaming" the cognitively disabled into relatively unsupported work, a well-intentioned and evidence-free project by (my) Team Liberal that is going to end in tears.

[2] In US males of European descent (chosen to avoid racism/sexism effects), cognition maps to academic achievement, which tests learning, social skills, temperament and the like.

Monday, September 14, 2015

Google Trends: Across my interests some confirmation and some big surprises.

I knew Google Trends was “a thing”, but it had fallen off my radar. Until I wondered if Craigslist was going the way of Rich Text Format. That’s when I started playing with the 10 year trend lines.

I began with Craigslist and Wikipedia...

  • Craigslist is looking post-peak
  • Wikipedia looks ill, but given how embedded it is in iOS I wonder if that’s misleading.
Then I started looking at topics of special relevance to my life or interests. First I created a set of baselines to correct for declining interest in web search. I didn't see any decline:
  • Cancer: rock steady, slight dip in 2009, slight trend since, may reflect demographics
  • Angina: downward trend, but slight. This could reflect lessening interest in search, but it may also reflect recent data on lipid lowering agents and heart disease.
  • Exercise: pretty steady
  • Uber: just to show what something hot looks like. (Another: Bernie Sanders)
Things look pretty steady over the past 10 years, so I decided I could assume a flat baseline for my favorite topics. That's when it got fascinating.

Some of these findings line up with my own expectations, but there were quite a few surprises. It’s illuminating to compare Excel to Google Sheets. The Downs Syndrome collapse is a marker for a dramatic social change — the world’s biggest eugenics program — that has gotten very little public comment. I didn’t think interest in AI would be in decline, and the Facebook/Twitter curves are quite surprising.

Suddenly I feel like Hari Seldon.

I’ll be back ...


Saturday, November 15, 2014

After the Apple Watch debacle - the Nano recovery

Seven years ago Clayton "Innovators Dilemma" Christensen wrote …

Clayton "innovators dilemma" Christensen: Apple will fail

… the prediction of the theory would be that Apple won’t succeed with the iPhone. They've launched an innovation that the existing players in the industry are heavily motivated to beat: It's not [truly] disruptive. History speaks pretty loudly on that, that the probability of success is going to be limited…

By “existing players” he meant Nokia (now a forgotten part of Microsoft). That’s the problem with making testable predictions — they break theories.

Which brings me to the aWatch, of which I am not a fan …

Gordon's Notes: Apple Watch - a bridge too far

… I don’t think the 1st generation Apple Watch will be nearly as successful in the US market, though it may have some success in its true target market of China. Unlike the much loved Nano-clip it doesn't solve anyone's problems well. A water-susceptible exercise device tied to an iPhone is far less useful than an inexpensive FitBit. An authentication device tied to an iPhone is redundant in today's world. The Apple Watch is a very limited music and video platform. It’s too big, it’s too expensive, it's too fragile (water), the battery is too small and the initial demo highlighted bumping hearts...

… A waterproof $150 iOS 8 Nano-clip replacement in Sept 2015 will be interesting. Splitting the cellular phone into multiple components, for which iPad and Apple Watch are interaction elements will be interesting. Standalone Apple Watch 4 running on next-generation LTE will be interesting.

Apple Watch 1 is a mistake.

The aWatch will launch in the US and Chinese markets in a few months. It will fail early in the US market. There will be initial success in China, then it will fall to China’s chaotic nationalism and less expensive and more useful Chinese clone-variants. It may have some persistent sales in Japan.

So what happens after that?

Jonathan Ive either leaves Apple or tolerates a diminished role. He's very wealthy and has accomplished much, so we shouldn't feel too sad for him. Tim Cook moves his executive team around and puts his rhinoceros skin to good use. The share price dips and returns to trend line.

I think that will all be good.

The interesting bit is what happens to the aWatch tech and how soon will we see it in another form?

The timing depends on what Apple really thinks is going to happen to the aWatch. I assume that some execs expect it to fail and that there’s a plan B, and maybe a Plan C, in the works. So what should we expect in the fall of 2015?

Physically the Plan B device looks a lot like the much beloved 6th generation Nano Clip. It will be designed to work with a wrist band or a clothing clip. It will be an excellent 32GB music device but will also act as a detached extension companion to an iPhone. It will be good at caching data and then posting it back when in phone range. It will have some (limited) GPS functionality and some exercise tracking ability when attached to the wrist.

Plan B will be modestly successful worldwide.

Then there’s Plan C. I owe Plan C to @duerig, who carries an ultra-slim flip phone and an iPad. I’m convinced he’s got things right — that the world is going to go towards people who carry just a phablet and people who carry a phablet and a mini-phone. Apple’s got the phablet market covered with the 6+. Plan C is a slightly larger and heavier version of the 7th generation nano paired with an iPad Mini 4 [1]. This iPhoneMini is a device Apple considered launching in 2011 and the iPad Mini 4 replaces the already forgotten iPad Mini 3 (Apple’s feeblest product hop ever).

I might buy Plan B (iOS nanoClip), I would definitely buy Plan C (iPhoneMini + iPad mini 4). 

Both of these are good futures that will leverage aWatch investments. Look for Cook to announce them when Apple buries the 1st generation aWatch. Which means that stock dip will be short-lived.

Saturday, April 26, 2014

Salmon, Piketty, Corporate Persons, Eco-Econ, and why we shouldn't worry

I haven't read Piketty's Capital in the Twenty-First Century. I'll skim it in the library some day, but I'm fine outsourcing that work to DeLong, Krugman and Noah.

I do have opinions of course! I’m good at having opinions.

I believe Piketty is fundamentally correct, and it's good to see our focus shifting from income inequality to wealth inequality. I think there are many malign social and economic consequences of wealth accumulation, but the greatest threat is likely the damage to democracy. Alas, wealth concentration and corruption of government are self-reinforcing trends. It is wise to give the rich extra votes, lest they overthrow democracy entirely, but fatal to give them all the votes.

What I haven't seen in the discussions so far is the understanding that the modern oligarch is not necessarily human. Corporations are persons too, and even the Koch Brothers are not quite as wealthy as AAPL. Corporations and similar self-sustaining entities have an emergent will of their own; Voters, Corporations and Plutocrats contend for control of avowed democracies [1]. The Rise of the Machine is a pithy phrase for our RCIIT-disrupted AI age, but the Corporate entity is a form of emergent machine too.

So when we think of wealth and income inequality, and the driving force of emergent process, we need to remember that while Russia’s oligarchs are (mostly vile) humans, ours are more mixed. That’s not necessarily a bad thing - GOOGL is a better master than David Koch. Consider, for example, the silencing of Felix Salmon:

Today is Felix's last day at Reuters. Here's the link to his mega-million word blog archive (start from the beginning, in March 2009, if you like). Because we're source-agnostic, you can also find some of his best stuff from the Reuters era at Wired, Slate, the Atlantic, News Genius, CJR, the NYT, and NY Mag. There's also Felix TV, his personal site, his Tumblr, his Medium archive, and, of course, the Twitter feed we all aspire to.

Once upon a time, a feudal Baron or Russian oligarch would have violently silenced an annoying critic like Salmon (example: Piketty - no exit). Today’s system simply found him a safe and silent home. I approve of this inhuman efficiency.

So what comes next? Salmon is right that our system of Human Plutocrats and emergent Corporate entities is more or less stable (think - stability of ancient Egypt). I think Krugman is wrong that establishment economics fully describes what's happening [2]; we still need to develop eco-econ — which is not "ecological economics". Eco-econ is the study of how economic systems recapitulate biological systems, and of how economic parasites evolve and thrive [3]. Eco-econ will give us some ideas on how our current system may evolve.

In any event, I’m not entirely pessimistic. Complex adaptive systems have confounded my past predictions. Greece and the EU should have collapsed, but the center held [4]. In any case, there are bigger disruptions coming [5]. We won’t have to worry about Human plutocrats for very long….


- fn -

[1] I like that 2011 post and the graphic I did then. I'd put "plutocrats" in the upper right these days. The debt ceiling fight of 2011 showed that Corporations and Plutocrats could be smarter than Voters, and the rise of the Tea Party shows that Corporations can be smarter than Voters and Plutocrats. Corporations, and most Plutocrats, are more progressive on sexual orientation and tribal origin than Voters. Corporations have neither gender nor pigment, and they are all tribes of one.

I could write a separate post about why I can’t simply edit the above graphic, but even I find that tech failure too depressing to contemplate.

[2] I don’t think Krugman believes this himself - but he doesn’t yet know how to model his psychohistory framework. He’s still working on the robotics angle.

[3] I just made this up today, but I dimly recall reading that the basic premises of eco-econ have turned up in the literature many times since Darwin described natural selection in biological systems. These days, of course, we apply natural selection to the evolution of the multiverse. Applications to economics are relatively modest.

[4] Perhaps because Corporations and Plutocrats outweighed Voters again — for better or for worse.

[5] Short version — we are now confident that life-compatible exoplanets are dirt common, so the combination of the Drake Equation (no, it's not stupid) and the Fermi Paradox means that wandering/curious/communicative civilizations are short-lived. That implies we are short-lived, because we're like that. The most likely thing to finish us off is our technological heirs.

Sunday, June 09, 2013

Cash purchases driving a new real estate bubble - too much wealth, too few investments

Cash-only real estate speculation in LA, Boston, Miami, San Francisco and so on (emphases mine) ...

... These days, the only way for would-be buyers to secure a home, it often seems, is to offer all cash and be ready to do so within hours, not days.

...first-time home buyers are competing with investors to get into single-family homes with prices approaching $1 million.

... large investors purchasing thousands of properties

... a third of all homes purchased in Los Angeles during the first quarter of this year went for all cash, compared with just 7 percent in 2007. In Miami, 65 percent of homes sold were for cash deals, compared with 16 percent six years ago.

... In Los Angeles, the median price on an all-cash home this year is about $351,000, compared with $230,000 in 2009. Over the same period, the median price over all increased to $410,000, up $85,000. In fact, last month, home prices in Southern California hit their highest level in the last five years.

... Buyers in Boston are offering $100,000 more than the asking price or placing offers on homes they have spent only minutes in.

... He also waived the inspection clause, an increasingly common practice... offers today are more likely to include escalation clauses, saying buyers will pay an additional amount over the highest bid.

... cash purchases fueled in part by international investors and retirees awash in cash after selling their homes elsewhere....

This fits reports a few months back of large numbers of purchased but unoccupied condominiums in luxury markets.

Where is all the cash coming from? The article doesn't say, but there's vast wealth in China now and few safe places to park it. Real estate is a classic Chinese investment. There's also a large amount of boomer wealth in play as my generation (noisily, because we are nothing if not loud) shuffles off the stage.

What happens next? I assume we're in for another one of our worldwide boom-bust cycles...

Gordon's Notes: Stock prices - resorting to another dumb hydraulic analogy


Why are we having these worldwide boom-bust cycles?

Ahh, if only we knew. Since I'm not an economist, and thus I have neither credibility to protect nor Krugman to fear, I'm free to speculate. I think the world's productive capacity has grown faster than the ability of our financial systems to manage it. There's too much wealth and potential wealth (in a fundamental sense, regardless of central bank actions) for our system to productively absorb. We're filling a 100 ml vial from a 10 liter bucket. Or, in Bernd Jendrissek's words: "The gain is too high for the phase shift for this feedback loop to be stable."
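Jendrissek's feedback-loop framing can be made concrete with a toy simulation. The numbers are invented, but it shows how an over-aggressive correction turns tracking into boom and bust:

```python
# Toy feedback loop: "the market" corrects toward fundamental value each
# step, but the correction gain is too high, so instead of settling it
# overshoots with growing amplitude. All numbers are invented.
target = 1.0   # fundamental value
price = 0.5    # starting price
gain = 2.2     # in this toy model stability requires 0 < gain < 2

for t in range(8):
    error = target - price
    price += gain * error          # over-correct toward the target
    print(f"t={t}: price={price:+.2f}")
# The output swings ever wider around 1.0: a boom-bust oscillation.
```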

If there's anything to this idea then we little people may want to remember the advice of Victor Niederhoffer, a wealthy man who has lost vast amounts of money in the post-RCIIT economy:

... Whenever disaster strikes, the very sagacious wealthy people take their canes, and they hobble down from their stately mansions on Fifth Avenue, and they buy stocks to the extent of their bank balances, and then a week or two later, the market rises, they deposit the overplus in their accounts, invest it in blue-chip real estate, and retire back to their stately mansions. That's probably the best way of making money, to be a specialist in panics. Whenever there's panic hanging in the air, that's a great time to invest...

Of course this implies one has a relatively tax efficient way of moving money in and out of cash -- and lots of cash to gamble without fear of unemployment. When downturns hit most of us need our cash as a hedge against job loss; only the 0.05% don't need to work. Even so, there may be a lesser version of the long game we can play to at least limit our crash pain. For example, perhaps a 21st century John Bogle will create a derivative that retail investors can purchase on the rise (when we have cash) that pays off on the fall (when we don't).

How long will it be before the world's financial systems catch up with our productive capacity -- especially given the rise of Africa and the unfolding of the post-AI world?

I suspect not in my lifetime [1]. It's whitewater as far as the eye can see.

Update: In surfing lingo a hard-breaking wave is called a "Cruncher". Perhaps "new Cruncher" is a better term than "new bubble".

- fn -

[1] Though if wealth were better distributed we might have the equivalent of filling that 100 ml vial from 10,000 1 ml vials. Much easier to stop before overfilling.

Saturday, May 25, 2013

Android tablet price crash: do we have a cereal box computer yet?

Fifteen years ago, I predicted sand-based tablet devices would soon follow the price-collapse trajectory of the pocket calculator. They would become so inexpensive that cheap versions would show up in cereal boxes. I was remembering the price crash that happened shortly after my family spent the equivalent of 500 2013 dollars to buy me a four function desktop calculator. (We were poor. That hurt.)

[image: cereal box computer]

Like a stuck clock I continued to repeat my prediction over the many years to come, albeit with less conviction. Finally, in 2010, Gassée told us Google was aiming for the $80 smartphone [1]. Which may have happened this year, albeit without much attention.

We have since moved closer to the cereal computer; eqe reports buying an Android tablet for $35 in Hong Kong in Nov 2012. That price presumably omits patent payments [5]; it is possible because AndroidOS is available without charge and Chinese factories have excess capacity to produce commodity components.

So, at last, the price collapse seems to be happening. The question is: why now?

One answer is that Moore's Law is failing; computers that were once good for only 2-3 years now work perfectly well for six years or more (barring component failure).

On deeper reflection, however, I think that's the wrong answer -- because the question is misleading. The price of computing has not really collapsed; only computers have become inexpensive.

So we may soon have our cereal box computers, but they won't be worth much. That's because an AndroidOS based 2013 tablet is both a network peripheral and an ad-consumption peripheral that requires network access to be truly useful. Network access is still relatively costly, on the order of €250/year in cutting-edge Estonia [3].

Alas, just as it seemed I might hit my old target, it split in two. I'll never hit it now; it no longer exists.

Eventually, of course, the direct cost of a certain form of computing will fall. Eventually GoogleOS devices will be able to access GoogleFunded networks for a very low cost [4]. Whether there will be other forms of computing at different prices remains to be seen.

The cereal computer remains one of my worst predictions.


[1] I assume anyone reading this is smart enough to know that contract-bound prices aren't worth discussing.
[2] Perhaps by low cost 4G wireless piggybacked on the fiber network they're building out in the US.
[3] Much more in lagging-edge America. 
[4] We will pay in other coins.
[5] I believe part of the reason calculator prices crashed is that there was minimal IP protection in those days; software patents had not been invented. I recall reading that large parts of calculator functionality were not patented.

Update 5/26/13: @danielgenser pointed me to a 2012 article on a limited-circulation issue of Entertainment Weekly that included the guts of an ultra-cheap Chinese Android smartphone.

Sunday, May 19, 2013

Stock prices - resorting to another dumb hydraulic analogy

Stocks are overpriced again. It's probably not too much of a bubble (yet), but we continue to be significantly above "trend".

[chart: US stock market, 1985 to 2013]

Whatever the heck that means. Economists no longer have rational models for stock prices; Apple's share price alone makes efficient market theory seem silly.

It is at times like this that barbers start talking about stock picks, insider traders get arrested, deficit figures improve, and people notice that BlackRock holds 4 trillion dollars in US stocks. Yeah, trillion. Soon we'll see headlines, if Time is still around, declaring "America is back".

Inevitably, people who know nothing compare post-1995 to pre-1995 stock behavior. Around the time that IT started to transform the world, and China and India became more-or-less industrialized nations, share prices became wavy over a five-year timeline …

[chart: five-year waves in share prices]

Kind of like a roller coaster, which is what the last fifteen years have felt like. (Note the roller coaster is "normal" to most people who read this; only old folks remember something more linear.)

We'd all love to know why this has happened, and if it's really going to go on like this for the next 30 years or so. So, in the last stage of desperation, amateurs like me resort to a hydraulic analogy.

Remember those trillions and trillions? It's as though they were a 10 liter bucket in the hands of BlackRock and the rest of us. The bucket is trying to hit the 1L mark in a 2L cylinder. It pours over the mark or under the mark. It's really hard to hit the mark. There's just too much money, and the market is too small.

We need a bigger market.

Update 5/26/13: I've been playing with this intuition, though I'm far from convinced it means anything. An obvious question is -- bigger compared to what? I think it's compared to the productive capacity of global economies. At this time, given the still underutilized potential of the educated populations of China and India, the potential of the post-AI era, and the unused capacity of recession-bound Europe, the global productive capacity is very large. Our public markets have grown over the past two decades, but my hunch is that this growth has been far exceeded by the world's productive capacity. Hence the need for bigger markets.

Saturday, March 23, 2013

Schneier: Security, technology, and why global warming isn't a real problem

In the Fever Days after September 2001, I wrote a bit about "the cost of havoc". The premise was that technology was consistently reducing the cost of havoc, but the cost of prevention was falling less quickly.

I still have my writing, but most of it is offline - esp. prior to 2004. As I said, those were the times of fever; back then we saw few alternatives to a surveillance society. Imagine that.

Ok, so that part did happen. On the other hand, we don't have Chinese home bioweapon labs yet. Other than ubiquitous surveillance, 2013 is more like 2004 than I'd expected.

The falling ratio of the cost of offense to the cost of defense remains, though. Today it's Schneier's turn to write about it… (emphases mine)

Schneier on Security: When Technology Overtakes Security

A core, not side, effect of technology is its ability to magnify power and multiply force -- for both attackers and defenders….

.. The problem is that it's not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They're more nimble and adaptable than defensive institutions like police forces. They're not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side -- it's easier to destroy something than it is to prevent, defend against, or recover from that destruction.

For the most part, though, society still wins. The bad guys simply can't do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?

I don't think it can.

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious...and so on. A single attacker, or small group of attackers, can cause more destruction than ever before...

.. Traditional security largely works "after the fact"… When that isn't enough, we resort to "before-the-fact" security measures. These come in two basic varieties: general surveillance of people in an effort to stop them before they do damage, and specific interdictions in an effort to stop people from using those technologies to do damage.

Lots of technologies are already restricted: entire classes of drugs, entire classes of munitions, explosive materials, biological agents. There are age restrictions on vehicles and training restrictions on complex systems like aircraft. We're already almost entirely living in a surveillance state, though we don't realize it or won't admit it to ourselves. This will only get worse as technology advances… today's Ph.D. theses are tomorrow's high-school science-fair projects.

Increasingly, broad prohibitions on technologies, constant ubiquitous surveillance, and Minority Report-like preemptive security will become the norm…

… sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder...

… If security won't work in the end, what is the solution?

Resilience -- building systems able to survive unexpected and devastating attacks -- is the best answer we have right now. We need to recognize that large-scale attacks will happen, that society can survive more than we give it credit for, and that we can design systems to survive these sorts of attacks. Calling terrorism an existential threat is ridiculous in a country where more people die each month in car crashes than died in the 9/11 terrorist attacks.

If the U.S. can survive the destruction of an entire city -- witness New Orleans after Hurricane Katrina or even New York after Sandy -- we need to start acting like it, and planning for it. Still, it's hard to see how resilience buys us anything but additional time. Technology will continue to advance, and right now we don't know how to adapt any defenses -- including resilience -- fast enough.

We need a more flexible and rationally reactive approach to these problems and new regimes of trust for our information-interconnected world. We're going to have to figure this out if we want to survive, and I'm not sure how many decades we have left.

Here's shorter Schneier, which is an awful lot like what I wrote in 2001 (and many others wrote in classified reports):

  • Stage 1: Universal surveillance, polite police state, restricted technologies. We've done this.
  • Stage 2: Resilience -- grow accustomed to losing cities. We're not (cough) quite there yet.
  • Stage 3: Resilience fails, we go to plan C. (Caves?)

Or even shorter Schneier:

  • Don't worry about global warming.

Grim stuff, but I'll try for a bit of hope. Many of the people who put together nuclear weapons assumed we'd have had a history-ending nuclear war by now. We've had several extremely close calls (not secret, but not widely known), but we're still around. I don't understand how we've made it this far, but maybe whatever got us from 1945 to 2013 will get us to 2081.

Another bright side -- we don't need to worry about sentient AIs. We're going to destroy ourselves anyway, so they probably won't do much worse.

Tuesday, January 01, 2013

Welcome to the 21st century: The primary themes

To be plausible, I've read, a novel must avoid reality.

What novel, for example, would start the 21st century with al Qaeda's attack on America? What novel would have an American President spend a trillion dollars and hundreds of thousands of lives attempting to recreate Grenada in Iraq while tossing aside the Laws of War?

Reality is not as cautious as writers. And so the 21st century began with the end of American exceptionalism. More than a decade later, we've got the feel of it. Not of the whole century for the whole world, but at least of the years from 2010 to 2040 for America.

What are the main themes? I'm sure I've missed a few, and of course there will be surprises, but here's my starter list: 

  • Demographics 1: From 2010 through 2040 America will be divided between an increasingly senile, largely white protestant, cohort born before 1964 and a relatively diverse and secular cohort born after 1964. The many "fiscal cliff" fights to come will reflect this shift.
  • Demographics 2: Even Hispanic birth rates are falling. The relative cost of children will continue to increase even as 93% of income growth goes to the top 1%. Given Demographics 1, America will have to attract millions of new immigrants -- even as the American brand struggles to recover from the Bush regime.
  • We are in the post-AI era of both great wealth and mass disability.
  • China and India - whether they thrive or struggle or both it's their story now.
  • Nuclear proliferation: More nuclear weapons, more launch systems, more hackable targets. Iran, North Korea, Pakistan, India ... and so on. [1]
  • The cost of havoc will continue to fall. I was really torqued about this in the months after 9/2001; I didn't see how we'd avoid turning into a surveillance society (at best) or an authoritarian state. Well, on the one hand we did turn into a surveillance society, but on the other hand we haven't seen any home-brewed bioweapons yet (except for this one, of course) [2]. I still think this problem is not going away, and neither is the surveillance state.
  • Innovation gap: AI aside, there's something wrong with the engines of our ingenuity. Maybe we've done all the easy stuff, maybe it's the NIH and the scientific-industrial complex, maybe it's because so much talent is wasted playing finance games, maybe it's the triumph of the Corporation and its IP laws, maybe it's all of the above and more. This gap is a bigger threat to our future than social security or even health care expenses.
  • Winner take all: It is insane that growth in our economic output is going to such a thin slice of our population -- 37% going to 15,000 households.
  • The triumph of the mega-corporation: For better and for worse, but mostly for worse, the large centrally-planned Corporation will rule the American economic landscape for decades to come. Elephants have made the ecosystem of the African Plains, and Corporations have made the laws and accounting systems of America. Citizens United will shape the decades to come.
  • Weather adaptation: The big devastation from CO2 emissions is probably in Book Two, but Book One will have big enough problems. We will eventually adopt carbon taxes; driven both by need to raise revenues (see above) and by the slow acceptance that we've whacked the Earth pretty hard.
  • Good enough health care: After exhausting every other option, the US will come to accept good enough health care.
  • No more big US wars: Being old and worried about budgets is not all bad.
It's a daunting list, but it's a list of challenges and fixable problems, not of disasters. Spicy food, chewy and a bit green on the edges, but edible. It could be worse.
 
- fn -
[1] There are two strong arguments for supernatural entities. One is the arrow of time (entropy low at t=0). The other is that we have not yet had a true nuclear war - despite all our close calls.
[2] Oh, yeah, and what novel would have a bioweapon attack follow 9/11, be used to justify a major war, and then be completely forgotten? 

Friday, December 28, 2012

The Post-AI era is also the era of mass disability

Is Stephen Hawking disabled?

[photo: Stephen Hawking]

Obviously this is a rhetorical question. Hawking is 70 and retired from the Lucasian chair, but he remains a tenured professor. He is a bestselling author of multiple popular books, has been married twice, and has three children.

He is clearly not disabled.

Is a physically strong male with an IQ of 65 disabled in Saint Paul, MN? Yes, of course. Forty years ago, however, there were many jobs that would pay above minimum wage for a strong back and a willingness to do tedious work. Heck, in those days men earned money literally pumping gas.

Would Stephen Hawking have been disabled in 1860? Yeah, for the short duration of his 19th century life.

Disability is relative to the technological environment. Once a missing leg meant disability, now it rules out only a small number of jobs. Once a strong back meant a job, now it means little.

Technology changes the work environment; it makes some disabled, and others able. It's an old trend, automated looms put textile artisans out of work 200 years ago.

Those artisans had a rough time, but workers with similar skill sets have done well since. Economic theory and history teach us that disruptive technological transformation can produce transient chaos, but that over time the resulting economic growth will benefit almost everyone. More or less.

But history only repeats until it doesn't. Economic benefits don't have to be evenly distributed. If fewer jobs require strong backs, then people whose primary talent is the strength of their spine may earn relatively less. If supply exceeds demand, the price of labor will fall below the "zero bound" of the minimum wage. Some backs won't find work; those workers are disabled.

Most people can play in more than one game, but the competition is getting tougher and the space for human advantage is shrinking in the post-AI era. The percentage of the population who are effectively disabled has been rising along with national income and the Gini coefficient. It's not just the pioneers now; respectable economists are wondering about tipping points.

So enter The Wolverine...


Krugman acts as though he's just started thinking about the post-AI economy, but he isn't fooling anyone. We know he grew up on Asimov and the Three Laws. Now that the election is done, and he doesn't have to be a strict non-structuralist any more [1], he's started writing about what the post-AI era means for income inequality using the phrase "Capital-biased technology". He has recently promised us a "future" column on policy implications.

Future - because he's trying to break it to us gently. I, of course, have no such qualms. A year ago I wrote about the policy implications of the Post-AI era (emphases added) ...

The AI Age: Siri and Me

... Economically, of course, the productivity/consumption circuit has to close... If .1% of humans get 80% of revenue, then they'll be taxed at 90% marginal rates and the 99.9% will do subsidized labor. That's what we do for special needs adults now, and we're all special needs eventually...

Or, in other words, "From each according to his ability, to each according to his need". In the post-AI era we will need to create employment for the mass disabled.

See also:

 and from the K (NYT):

elsewhere

- fn -

[1] Clark Goble made me read a critique of my team's champion. I found hurt feelings (K has claws), but no substantive critiques. That's a shame; I've long wanted to see somebody like Mankiw (who was once readable) engage K on his denial of structural factors in 2009-2012 unemployment. I suspect K has always known of ways to argue the structural case despite the persuasive low global-demand data. I wonder if he was disappointed that nobody dared challenge him.

Tuesday, July 10, 2012

Health care: We don't want more stuff, we want more years.

Stanford's Chad Jones and Robert Hall tell us health care spending really is different ...

Why Americans want to spend more on health care (Louis Johnston, MinnPost, 7/6/12)

... Income elasticity measures how much more of a good or service a person will buy if their income goes up by 1 percent. For most goods and services this number is less than 1; that is, if income rises then people will buy more of most goods but they will increase their purchases by less than 1 percent. 

Years of life are different. If you have a medical procedure that extends your life, then the first, second, third and however many extra years you receive are all equally valuable. So if your income rises by 1 percent, you will increase your spending on medical care by at least 1 percent, and possibly more.

Jones, along with Robert E. Hall (also of Stanford) embedded this idea in an economic model and found that it does a good job predicting the path of health care expenditures from 1950 to 2000. Further, they show that if this is true, then the share of GDP we devote to health care could easily rise to 30 percent or more over the next 50 years as people choose to spend more on health care to obtain more years of life.

Thinking about the rise in medical spending this way puts health care policy in a different light. People want to live longer, better lives, and they are willing to pay for it. They don’t want more stuff, they want more life...
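The elasticity argument is easy to make concrete. A toy calculation (all parameter values invented) shows how an elasticity above 1 forces the health share of income upward as income grows:

```python
def health_share(income, base_income=50_000, base_share=0.15, elasticity=1.4):
    """Toy Hall/Jones-style illustration: when spending on health grows
    faster than income (elasticity > 1), its share of income must rise.
    All parameter values here are invented for illustration."""
    spending = base_share * base_income * (income / base_income) ** elasticity
    return spending / income

for income in (50_000, 100_000, 200_000):
    print(f"income {income:>7,}: health share {health_share(income):.1%}")
# 15.0%, then ~19.8%, then ~26.1% -- the share climbs as income grows.
```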

Life extending [1] health care is an inexhaustible good. That's what simplistic happiness studies, like a pseudo-science [2] article claiming that $75,000 is "enough", usually miss. They implicitly assume, or indirectly measure, good health [3].

Years ago, when health care spending was a mere 12% of GDP (we're about 15% now), my partner, Dr. John H, saw no reason why it wouldn't, and shouldn't, rise to a then-unthinkable 15% or more. His point was that people like being healthy, and to the extent that health care works, they will want more of it.

Health care that is perceived to be effective is the ultimate growth industry.

That's why this is where we'll end up. We could do much worse.

[1] A shorthand for extending life that we care about, particularly life-years of loved ones. More years of dementia don't count, though significant disability has less impact than many imagine. I assume there's some amount of quality lifespan that would, depending on one's memory, have an income elasticity of less than one. Science fiction writers often put that at somewhere between 300 and 30,000 years.
[2] I read the published study; "Participants answered our questions as part of a larger online survey, in return for points that could be redeemed for prizes." Can you imagine a less representative population? Needless to say they didn't define what household income meant, yet they turned this into a NYT article.
[3] The Jimmy Johns' insultingly stupid parable of the Mexican banker is a particularly egregious example.

Thursday, July 05, 2012

Google's Project Glass - it's not for the young

I've changed my mind about Project Glass. I thought it was proof that Brin's vast wealth had driven him mad, and that Google was doing a high speed version of Microsoft's trajectory.

Now I realize that there is a market.

No, not the models who must, by now, be demanding triple rates to appear in Google's career-ending ads.

No, not even Google's geeks, who must be frantically looking for new employment.

No, the market is old people. Geezers. People like me; or maybe me + 5-10 years.

We don't mind that Google Glass looks stupid -- we're ugly and we know it.

We don't mind that Google Glass makes us look like Borg -- we're already good with artificial hips, knees, lenses, bones, ears and more. Nature is overrated and wears out too soon.

We don't mind wearing glasses, we need them anyway.

We don't mind having something identifying people for us, recording where we've been and what we've done, selling us things we don't need, and warning us of suspicious strangers and oncoming traffic. We are either going to die or get demented, and the way medicine is going the latter is more likely. We need a bionic brain; an ever-present AI keeping us roughly on track and advertising cut-rate colonoscopy.

Google Glass is going to be very big. It just won't be very sexy.