
Monday, February 20, 2023

Be afraid of ChatGPT

TL;DR: It's not that ChatGPT is miraculous, it's that cognitive science research suggests human cognition is also not miraculous.

"Those early airplanes were nothing compared to our pigeon-powered flight technology!"

https://chat.openai.com/chat - "Write a funny but profound sentence about what pigeons thought of early airplanes"

Relax - ChatGPT is just a fancy autocomplete.

Be Afraid - Much of human language generation may be a fancy autocomplete.
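
For concreteness, here is what "autocomplete" means at its crudest: a toy Python sketch that predicts each next word purely from counts of what followed it before. The tiny corpus is made up, and ChatGPT's machinery is vastly more elaborate, but the loop has the same shape.

    import random
    from collections import defaultdict

    corpus = ("those early airplanes were nothing compared to our "
              "pigeon powered flight technology said the pigeons").split()

    # bigram table: which words have followed each word so far
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def autocomplete(word: str, length: int = 8) -> str:
        out = [word]
        for _ in range(length):
            options = following.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))   # sample the next word
        return " ".join(out)

    print(autocomplete("pigeon"))
    # e.g. "pigeon powered flight technology said the pigeons"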

Relax - ChatGPT confabulates.

Be Afraid - Humans with cognitive disabilities routinely confabulate, and under enough stress most humans will confabulate.

Relax - ChatGPT can’t do arithmetic.

Be Afraid - If a monitoring system detects that a question involves arithmetic or mathematics, it can invoke a math system* (a minimal routing sketch follows the update below).


UPDATE: 2 hours after writing this I read that this has been done.
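
A minimal sketch of that monitor-and-dispatch idea, in Python, with no claim about how OpenAI actually wired it up: a crude regex decides whether a question looks like arithmetic and, if so, routes it to a math engine instead of the language model. The function names and the eval-based "math system" are placeholders for illustration only.

    import re

    def looks_like_arithmetic(question: str) -> bool:
        # crude monitor: a digit, an arithmetic operator, another digit
        return bool(re.search(r"\d\s*[-+*/]\s*\d", question))

    def math_system(question: str) -> str:
        # stand-in for a real math engine (eval is unsafe outside a sketch)
        expr = re.sub(r"[^0-9+\-*/(). ]", "", question)
        return str(eval(expr))

    def language_model(question: str) -> str:
        return "plausible-sounding prose about: " + question   # placeholder

    def answer(question: str) -> str:
        if looks_like_arithmetic(question):
            return math_system(question)
        return language_model(question)

    print(answer("What is 1234 * 5678?"))                        # routed to the math system
    print(answer("What did pigeons think of early airplanes?"))  # routed to the language model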

Relax - ChatGPT’s knowledge base is faulty.

Be Afraid - ChatGPT’s knowledge base is vastly larger than that of most humans, and it will quickly improve.

Relax - ChatGPT doesn’t have explicit goals other than a design goal to emulate human interaction.

Be Afraid - Other goals can be implemented.

Relax - We don’t know how to emulate the integration layer humans use to coordinate input from disparate neural networks and negotiate conflicts.

Be Afraid - *I don't know the status of such an integration layer. It may already have been built. If not, it may take years or decades -- but probably not many decades.

Relax - We can’t even get AI to drive a car, so we shouldn’t worry about this.

Be Afraid - It’s likely that driving a car basically requires near-human cognitive abilities. The car test isn’t reassuring.

Relax - ChatGPT isn’t conscious.

Be Afraid - Are you conscious? Tell me what consciousness is.

Relax - ChatGPT doesn’t have a soul.

Be Afraid - Show me your soul.

Relax - I'm bad at predictions. In 1945 I would have said it was impossible, barring celestial intervention, for humanity to go 75 years without nuclear war.


See also:

  • All posts tagged as skynet
  • Scott Aaronson and the case against strong AI (2008). At that time Aaronson felt a sentient AI was sometime after 2100. Fifteen years later (Jan 2023) Scott is working for OpenAI (ChatGPT). Emphases mine: "I’m now working at one of the world’s leading AI companies ... that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie ... that AI will gain many of the remaining abilities in years rather than decades, and .. my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world."
  • Imagining the Singularity - in 1965 (2009 post). Mathematician I.J. Good warned of an "intelligence explosion" in 1965. "Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British statistician who worked as a cryptologist at Bletchley Park."
  • The Thoughtful Slime Mold (2008). We don't fly the way birds fly.
  • Fermi Paradox resolutions (2000)
  • Against superhuman AI: in 2019 I felt reassured.
  • Mass disability (2012) - what happens as more work is done best by non-humans. This post mentions Clark Goble, an app.net conservative I miss quite often. He died young.
  • Phishing with the post-Turing avatar (2010). I was thinking 2050 but now 2025 is more likely.
  • Rat brain flies plane (2004). I've often wondered what happened to that work.
  • Cat brain simulator (2009). "I used to say that the day we had a computer roughly as smart as a hamster would be a good day to take the family on the holiday you've always dreamed of."
  • Slouching towards Skynet (2007). Theories on the evolution of cognition often involve aspects of deception including detection and deceit.
  • IEEE Singularity Issue (2008). Widespread mockery of the Singularity idea followed.
  • Bill Joy - Why the Future Doesn't Need Us (2000). See also Wikipedia summary. I'd love to see him revisit this essay but, again, he was widely mocked.
  • Google AI in 2030? (2007) A 2007 prediction by Peter Norvig that we'd have strong AI around 2030. That ... is looking possible.
  • Google's IQ boost (2009) Not directly related to this topic but reassurance that I'm bad at prediction. Google went to shit after 2009.
  • Skynet cometh (2009). Humor.
  • Personal note - in 1979 or so John Hopfield excitedly described his work in neural networks to me. My memory is poor but I think we were outdoors at the Caltech campus. I have no recollection of why we were speaking, maybe I'd attended a talk of his. A few weeks later I incorporated his explanations into a Caltech class I taught to local high school students on Saturday mornings. Hopfield would be about 90 if he's still alive. If he's avoided dementia it would be interesting to ask him what he thinks.

Friday, October 29, 2021

The Cybernated Generation: Time Magazine, April 2nd 1965

First check out the Time magazine covers for 1965. That was a very long time ago. Things have improved.

Now look at the April 2nd issue and particularly The Cybernated Generation. Every generation since 1965 has been declared cybernated or digitized or meta-sized.

The article is fascinating as a history of computing and our understanding of its impact -- and as a cultural artifact about a world of white men in white coats. There are no women save a secretary to "pass" at. There is no melanin. There are nerds. Some hyperbole aside there's not a lot that the author missed about the world to come...

As viewed by Sir Leon Bagrit, the thoughtful head of Britain's Elliot-Automation, the computer and automation will bring "the greatest change in the whole history of mankind."

... Boeing announced plans two weeks ago to outfit jetliners with computer-run systems that will land a plane in almost any weather without human help. A new "talking computer" at the New York Stock Exchange recently began providing instant stock quotations over a special telephone. In Chicago a drive-in computer center now processes information for customers while they wait, much as in a Laundromat. The New York Central recently scored a first among the world's railroads by installing computer-fed TV devices that will provide instant information on the location of any of the 125,000 freight cars on the road's 10,000 miles of track...

...  In 1834 an eccentric Englishman named Charles Babbage conceived the idea of a steam-driven "Analytical Engine" that in many details anticipated the basic principles of modern computers. 

... Even if no further advances were made in computer technology, some scientists maintain, the computer has provided enough work and opportunities for man for another thousand years....

... The most expensive single computer system in U.S. business is American Airlines' $30.5 million SABRE, a mechanical reservation clerk that gives instant up-to-the-minute information about every plane seat and reservation to American's 55 ticket offices. ...

... Computers now read electrocardiograms faster and more accurately than a jury of physicians. The Los Angeles police department plans to use computers to keep a collection of useful details about crimes and an electronic rogue's gallery of known criminals. And in a growing number of schools, computers have taken jobs as instructors in languages, history and mathematics...

... IBM is far and away the leader in the field, both in the U.S. and abroad...

... The computers have also spawned the so-called "software" industry, composed of computer service centers and independent firms that program machines and sell computer time (for as little as $10 an hour) to businesses that do not need a machine fulltime....

... Because computer technology is so new and computers require such sensitive handling, a new breed of specialists has grown up to tend the machines. They are young, bright, well-paid (up to $30,000) and in short supply. With brand-new titles and responsibilities, they have formed themselves into a sort of solemn priesthood of the computer, purposely separated from ordinary laymen. Lovers of problem solving, they are apt to play chess at lunch or doodle in algebra over cocktails, speak an esoteric language that some suspect is just their way of mystifying outsiders. Deeply concerned about logic and sensitive to its breakdown in everyday life, they often annoy friends by asking them to rephrase their questions more logically....

Until now computer experts could only communicate with their machines in one of 1,700 special languages, such as COBOL (Common Business Oriented Language), Fortran (Formula Translation), MAD (Michigan Algorithmic Decoder) and JOVIAL (Jules's Own Version of the International Algebraic Language). All of them are bewildering mixtures that only the initiated can decipher. Now some computers have reached the point where they can nearly understand—and reply in—plain English. The new Honeywell 20 understands a language similar enough to English so that an engineer can give it written instructions without consulting a programmer. The day is clearly coming when most computers will be able to talk back.

... Each week, the Government estimates, some 35,000 U.S. workers lose or change their jobs because of the advance of automation. There are also thousands more who, except for automation, would have been hired for such jobs. If U.S. industry were to automate its factories to the extent that is now possible—not to speak of the new possibilities opening up each year—millions of jobs would be eliminated. Obviously, American society will have to undergo some major economic and social changes if those displaced by machines are to lead productive lives.

Men such as IBM Economist Joseph Froomkin feel that automation will eventually bring about a 20-hour work week, perhaps within a century, thus creating a mass leisure class. Some of the more radical prophets foresee the time when as little as 2% of the work force will be employed, warn that the whole concept of people as producers of goods and services will become obsolete as automation advances. Even the most moderate estimates of automation's progress show that millions of people will have to adjust to leisurely, "nonfunctional" lives, a switch that will entail both an economic wrench and a severe test of the deeply ingrained ethic that work is the good and necessary calling of man...

... Many scientists hope that in time the computer will allow man to return to the Hellenic concept of leisure, in which the Greeks had time to cultivate their minds and improve their environment while slaves did all the labor. The slaves, in modern Hellenism, would be the computers...

... The computer has proved that many management decisions are routine and repetitive and can be handled nicely by a machine. Result: many of the middle management jobs of today will go to computers that can do just about everything but make a pass at a secretary...

... What it cannot do is to look upon two human faces and tell which is male and which is female, or remember what it did for Christmas five years ago." Bellman might get an argument about that from some computermen, but his point is valid...

... Most scientists now agree that too much was made in the early days of the apparent similarities between computers and the human brain. The vacuum tubes and transistors of computers were easy to compare to the brain's neurons—but the comparison has limited validity. "There is a crude similarity," says Honeywell's Bloch, "but the machine would be at about the level of an amoeba."... eventually the idea that a machine has humanlike intelligence will become part of folklore...

... In the years to come, computers will be able to converse with men, will themselves run supermarkets and laboratories, will help to find cures for man's diseases, and will automatically translate foreign languages on worldwide TV relayed by satellite. Optical scanning devices, already in operation in some companies, will eventually enable computers to gobble up all kinds of information visually. The machines will then be able to memorize and store whole libraries, in effect acquiring matchless classical and scientific educations by capturing all the knowledge to which man is heir....

... computers will eventually become as close to everyday life as the telephone—a sort of public utility of information...

... the computer is already upsetting old patterns of life, challenging accepted concepts, raising new specters to be conquered. Years from now man will look back on these days as the beginning of a dramatic extension of his power over his environment, an age in which technology began to recast human society. In the long run, the computer is not so much a challenge to man as a challenge for him: a triumph of technology to be developed, subdued and put to constantly increasing use.

Thursday, March 11, 2021

Weird world example: "virtual cameras" for online videoconferencing

Somewhere in my twitter stream mention was made of the "virtual camera" and its advantages for videoconferencing especially as a replacement for screen sharing.

Sounded interesting, so I went looking for more information. I thought it would be an easy google search.

All I could find, in written form, was one quickly written blog post from 2019.

Twenty-five years ago a similar topic would have had a deep technical article in BYTE and a myriad of articles in PC Magazine, Mac User, and the like. Fifteen years ago there would have been hundreds of excellent blog posts. Google would have found them all.

Now Google finds almost nothing.

How is discovery happening in 2021? I'm ancient, so I'm quite ready to believe there are sources I don't know about (and Google, evidently, doesn't care about). What are they? What is the replacement for Google search?

Friday, October 23, 2020

What I would add to the standard blog model

The combination of RSS and the basic blog post is almost perfect. There are, however, a few things that would make them even better as a way to share information in most organizations.

  1. Provide an email summary option that is generated on post, daily, or weekly. (SharePoint does this for some list types.)
  2. Add a Publication Date in addition to the Last Modified date; make the Publication Date the primary sort field and make it user-editable without changing the URL. That way a publisher can easily replace a topical post with a newer version and optionally republish it as new. (For organizational use the historical record is unimportant.)
  3. Author control of notification generation. There's no need to generate a fresh RSS entry or email update for the correction of a few typos or other minor changes.
  4. Easy sharing of articles by email and other 'send' modalities.
  5. Tags with tag specific rss and views (many blogs do this).
  6. Editing tags should not generate notifications.
That's all; a minimal data-model sketch of these rules follows. No comments -- in practice I think these can occur elsewhere.
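
Here is a minimal data-model sketch of items 2, 3, and 6, in Python purely for illustration; the field names and the stub fan-out functions are mine, not any real blog engine's API.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Post:
        url_slug: str             # fixed at creation; editing dates never changes the URL
        title: str
        body: str
        tags: set = field(default_factory=set)
        publication_date: datetime = field(default_factory=datetime.now)  # user-editable, primary sort key
        last_modified: datetime = field(default_factory=datetime.now)     # system-maintained

    def publish_rss_entry(post: Post) -> None:
        print("RSS entry:", post.title)        # hypothetical fan-out stub

    def queue_email_summary(post: Post) -> None:
        print("email queued:", post.title)     # hypothetical fan-out stub

    def save(post: Post, notify: bool = True) -> None:
        """The author decides, per save, whether subscribers hear about it."""
        post.last_modified = datetime.now()
        if not notify:
            return                             # typo fixes and tag edits generate nothing
        publish_rss_entry(post)
        queue_email_summary(post)

    save(Post("rss-wishlist", "What I would add to the standard blog model", "…"))
    save(Post("rss-wishlist", "Same post, typos fixed", "…"), notify=False)   # silent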

PS. It looks like Blogger has broken tags/labels as of today! So this post has none.

Friday, December 06, 2019

The killer application for Apple's AR glasses will be driving

Sucks to get old. At 60 my night vision is probably half of what it was at 25. I drive slowly at night to reduce the risk of missing a pedestrian.

What I need are AR glasses that receive input from forward-facing light-sensitive sensors and enhance what I see as I drive. Draw circles around pedestrians. Turn night into day. With the usual corrective lenses of course.
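
The glasses don't exist, but the detection-and-overlay piece is nearly commodity software. Here is a toy sketch using OpenCV's stock pedestrian detector on a live camera feed; the brightness boost is a crude stand-in for real low-light enhancement, and nothing here is remotely road-worthy.

    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)                      # forward-facing camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        view = cv2.convertScaleAbs(frame, alpha=2.0, beta=40)   # crude "turn night into day"
        rects, _ = hog.detectMultiScale(view, winStride=(8, 8))
        for (x, y, w, h) in rects:
            center = (x + w // 2, y + h // 2)
            cv2.circle(view, center, max(w, h) // 2, (0, 255, 0), 3)  # circle the pedestrian
        cv2.imshow("enhanced view", view)
        if cv2.waitKey(1) == 27:                   # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()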

I’d pay a few thousand for something like that.

Seems quite doable.

Saturday, August 17, 2019

Sorrow for the Long Tail - the memory machine I will never see

There are several software products I want nobody will build.

For example, I want a “screen saver” that will randomly select from a collection of video and still images and display them across multiple screens.

Pretty much like Apple’s annoying [1] screen saver, but for video it would randomly select an xx-second file segment and play that without sound.
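
A rough sketch of that behavior, assuming a folder of media and ffmpeg's ffplay on the path; the folder location and segment length are placeholders for whatever "xx" turns out to be, and a real product would still need proper screen-lock, multi-display, and NAS integration.

    import random
    import subprocess
    import time
    from pathlib import Path

    LIBRARY = Path.home() / "Pictures" / "Saver"   # hypothetical media folder
    SEGMENT_SECONDS = 20                           # stand-in for the "xx second" segment
    VIDEO = {".mov", ".mp4", ".m4v"}
    STILL = {".jpg", ".jpeg", ".png"}

    def show_one() -> None:
        files = [p for p in LIBRARY.rglob("*") if p.suffix.lower() in VIDEO | STILL]
        pick = random.choice(files)
        if pick.suffix.lower() in VIDEO:
            start = random.randint(0, 60)          # naive random start point
            subprocess.run(["ffplay", "-ss", str(start), "-t", str(SEGMENT_SECONDS),
                            "-an", "-autoexit", "-loglevel", "quiet", str(pick)])
        else:
            subprocess.run(["open", str(pick)])    # macOS default image viewer
            time.sleep(SEGMENT_SECONDS)            # hold the still for the same interval

    if __name__ == "__main__":
        while True:
            show_one()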

I don’t think anyone will ever build this. It’s too hard to do [2] and there’s no money in it. Only a small number of people would pay, say, $20 for this. Maybe 1 in a 1000. After expenses and marketing it would be hard to earn even a few thousand dollars.

Which reminds us of the false promise of The Long Tail. Those were the days that Netflix had a huge catalogue of barely viewed movies [3] that were often very fine. We thought there would be business for the interests of the 0.1%. That didn’t happen.

This is why I’ve given up on trying to predict the future ...

--

[1] Whenever macOS cannot connect to the folder hosted on my NAS it reverts to the default collection. I need to restore my share and I’ve never been able to find an automated way to do that. On iOS things are much worse. Speaking of products I want, I’d pay $20 for a macOS utility that simply reset my screen saver to my preferred share.

[2] We never thought software development would keep getting harder. We used to think there would be a set of composable tools we could all use (OpenDoc, AppleScript, etc). We expected a much more advanced version of what we had on DOS or Unix in the 80s or the early 90s web. Instead we got AngularJS.

[3] In the mailer days our kids movies were unplayable due to disc damage about half the time. Finally gave up on that.

Tuesday, February 26, 2019

Primary care 2019 vs. 1989

There’s been at least a 200-300% increase in care complexity between when I started medical practice and today. Many new classes of medications for fairly common disorders, many more specialty interventions that may be considered.

At the same time computer based clinical decision support systems have been a surprising failure. (Emily uses Epic, I use VistA/CPRS). In the 90s we expected far more than we actually got.

We are asking a lot of the modern primary care physician.

Monday, February 18, 2019

Greenlight card for "kids" - early impressions

I ordered the Greenlight cards for our children. Only one is a minor (and she uses a regular debit card) but two are special needs adults who are more vulnerable to financial scams or misjudgments. I also got one for a sibling with some similar issues though that is certainly not the Greenlight market.

So far it’s been a mixed experience. The Greenlight site has surprisingly poor documentation — basically some simplistic FAQs. They don’t document where the cards don’t work, but from customer support I got:

Since our cards are meant for children, there are certain places that our cards will not work. Liquor stores, gambling websites or establishments, money orders, MoneySend and wire transfers are some examples of things that our cards will not work for. Since we are a prepaid debit card, we have also had families experience some trouble when attempting to use our cards to pay bills. We recommend not transferring funds to the Greenlight card that need to be used for utilities or other bills.

That seems reasonable, but it’s not the complete list. They don’t work for Patreon, for example — my son wanted to donate there. Greenlight won’t provide a full list.

There’s also a problem with Greenlight.app behavior on one child’s phone. Again, this is undocumented, but I think there are two paths it should follow on launch. One path should enable access to card balance, the other is for requesting a card. On his phone it goes down the wrong path. Hard to sort out since, again, there’s no documentation.

The vibe I get from Greenlight is that it is a venture-funded effort that didn’t scale quickly enough …

Saturday, February 02, 2019

Against superhuman AI

I am a strong-AI pessimist. I think by 2100 we’ll be in range of sentient AIs that vastly exceed human cognitive abilities (“skynet”). Superhuman-AI has long been my favorite answer to the Fermi Paradox (see also); an inevitable product of all technological civilizations that ends interest in touring the galaxy.

I periodically read essays claiming superhuman-AI is silly, but the justifications are typically nonsensical or theological (soul-equivalents needed).

So I tried to come up with some valid reasons to be reassured. Here’s my list:

  1. We’ve hit the physical limits of our processing architecture. The “Moore-era” is over — no more doubling every 12-18 months. Now we slowly add cores and tweak hardware. The new MacBook Air isn’t much faster than my 2015 Air. So the raw power driver isn’t there.
  2. Our processing architecture is energy inefficient. Human brains vastly exceed our computing capabilities and they run on a meager supply of glucose and oxygen. Our energy-output curve is wrong.
  3. Autonomous vehicles are stuck. They aren’t even as good as the average human driver, and the average human driver is obviously incompetent. They can’t handle bicycles, pedestrians, weather, or map variations. They could be 20 years away, they could be 100 years away. They aren’t 5 years away. Our algorithms are limited.
  4. Quantum computers aren’t that exciting. They are wonderful physics platforms, but quantum supremacy may be quite narrow.
  5. Remember when organic neural networks were going to be fused into silicon platforms? Obviously that went nowhere since we no longer hear about it. (I checked, it appears Thomas DeMarse is still with us. Apparently.)

My list doesn’t make superhuman-AI impossible of course, it just means we might be a bit further away, closer to 300 years than 80 years. Long enough that my children might escape.

Saturday, October 20, 2018

We still need a way to explore and resurface old blog posts

Six years ago I wrote about browsing the blog backlist and the need to resurface content from old blog postings. Today, even in the supposed twilight of blogs [1], I was again reminded how much we need a tool for excavation of old posts. I can think of at least one way to do it (standard metadata for blog history, random selection of past posts based on internal and external inbound links) but there are probably several.
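
One possible reading of "random selection of past posts based on internal and external inbound links", sketched in Python; the archive entries and link counts are invented for illustration.

    import random

    # hypothetical archive: (url, inbound_link_count)
    archive = [
        ("/2008/the-thoughtful-slime-mold", 12),
        ("/2009/imagining-the-singularity", 30),
        ("/2012/mass-disability",           55),
        ("/2016/crisis-t-iphone",            8),
    ]

    def resurface(k: int = 1) -> list:
        # weight selection by inbound links so well-connected old posts surface more often
        urls    = [url for url, _ in archive]
        weights = [links for _, links in archive]
        return random.choices(urls, weights=weights, k=k)

    print(resurface(2))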

Maybe something for a future blog renaissance to tackle. Or if Feedbin is looking for a new feature set …

- fn -

[1] On the one hand I accept that RSS and blogs are vanishing; on the other my Feedbin stream is as rich and engrossing as ever, covering hundreds of sources.

Sunday, May 20, 2018

Snapshot of a changing world - electric fat bikes

This “out of stock” direct sale electric fat bike is a sign of the times…

[Screenshot: the out-of-stock electric fat bike product listing, May 20, 2018]

I remember when, just before the great .com crash, a mysterious ‘product x’ was going to revolutionize the world:

John Doerr speculated that it would be more important than the Internet.[6] South Park devoted an episode to making fun of the hype before the product was released. Steve Jobs was quoted as saying that it was "as big a deal as the PC",[6] 

That product turned out to be the Segway. Which was, and is, a pretty neat product — but perhaps a few decades ahead of its time.

Meanwhile, as a quiet consequence of Li-ion battery evolution and China, eBikes are growing exponentially. A fusion between traditional bicycle and scooter.

Like most cyclists I have mixed feelings about eBikes. On balance I think they are a good thing, but they will certainly have bad consequences — especially when collisions happen. A 250 lb adult on a 50 lb eBike moving at 25 mph is a lot of kinetic energy.

Mostly though, this is a marker of a changing world.

Wednesday, April 18, 2018

Dyer on the 21st century crisis of mass unemployment

I believe this is true — though I’d be more confident if one of my favorite economists thought this was plausible (emphases mine):

If The Model Is Broken, Fix It | Gwynne Dyer

… The political model of Western-style democracy, which grew up alongside and then within a capitalist economic model, is now broken. Exhibit Number One is Donald Trump, but there’s lots of other evidence too.

One-third of French voters backed Marine Le Pen, a cleaned-up, user-friendly neo-fascist, in last year’s presidential election. In last September’s German election, one-eighth of the electorate voted for Alternative for Germany, a party whose more extreme wing is neo-Nazi – but it now leads the opposition in the Bundestag, the German parliament.

Last month in Italy, the two biggest parties to emerge from the election were both led by populist rabble-rousers, one from the left and one from the right. Not to mention Brexit in Britain. And in every case the themes that dominated the populists’ rhetoric were racism, nationalism, hostility to immigrants – and jobs.

Trump rarely talked about anything else during the presidential election campaign: immigrants are stealing the jobs, free-trading American businessmen are exporting the jobs, the foreigners are eating America’s lunch….

Trump may not know a lot, but he knows One Big Thing. We are living in a new era of mass unemployment, and nobody has noticed. As Trump said the night after he won the New Hampshire primary in February 2016: “Don’t believe those phony numbers when you hear 4.9 and 5 percent unemployment. The number’s probably 28, 29, as high as 35. In fact, I even heard recently 42.”

It’s not really 42 percent, but it’s not 4.1 percent (the current official US rate) either. According to Nicholas Eberstadt’s ‘Men Without Work’, the real unemployment rate among American men of prime working age (24-55) – including those who don’t get counted because they have given up looking for work – is 17 percent.

Why didn’t we notice? Because the unemployed weren’t protesting in the streets like they did in the Great Depression of the 1930s, although the rate is getting up to Depression era levels. After the Second World War, all the Western democracies built welfare states, mainly so a new generation of radical populist leaders would not come to power the next time there is mass unemployment.

It has worked, in the sense that there is not blood in the streets this time around, but the jobless millions are very angry even if the welfare state means that they are not starving. They do vote, and unless something is done to ease their anger, next time they may vote for somebody who makes Trump look good by comparison.

But if the problem is unemployment, then the answer is not obvious, because the main cause of unemployment in Western countries is not immigration or ‘offshoring’ jobs, as Trump pretends. It is computers.

One-third of American manufacturing jobs have vanished in the past 20 years, and the vast majority of them (85 percent) were destroyed by automation. The algorithms and the robot arms have already killed the Rust Belt, and there is a plausible prediction that almost half of existing American jobs may be automated out of existence in the next 20 years.

What would our politics look like then? Not very democratic, unless we do something to ease the anger of the unemployed. This doesn’t just mean giving them more money – a massive expansion of the welfare state – but also finding ways of taking the shame out of unemployment, because it is the humiliation of being seen as a loser that breeds the anger…

I’ve called this ‘mass disability’, because to me it’s a mismatch between the skills the majority of humans have and the skills needed to earn a middle class or better income.

I don’t have any explanation for why the entire western world is simultaneously in crisis other than what I wrote about in 2010 - Globalization (China) and Information Technology.


Monday, March 26, 2018

Macintouch in the twilight

I’ve read Macintouch for decades. It’s been a living fossil for 15 of those years; Ric passed on RSS and blogs and feeds and permalinks. For a year or two he tried to get permalinks working — which made Macintouch potentially tweetable. Recently those went away, so I wasn’t surprised by today’s announcement …

Thirty years is a long time. The Macintosh computer debuted more than three decades ago, and I've been involved with this revolutionary system and the community around it since 1984. A lot has changed in the meantime. 

… I’ve been constantly engaged, inspired and supported by the MacInTouch community for all these many years, but I think it's time to do something different. 

The revenue that used to sustain MacInTouch has dropped below a viable business minimum, while a plethora of other websites, operating under different business and security models, produces constant Apple news, reviews and commentary.  

The MacInTouch Discussions forum is unique, as far as I know, but it's also unsustainably labor-intensive, and there's no way around that in its current incarnation. 

At this point, my plan is to continue running MacInTouch Discussions and home/news pages at a reduced intensity for a little longer.  But, before long, it will be time for a change - a sabbatical, a new blog, research, development, or something else – I'm not quite sure what yet, but I expect macintouch.com to continue in some form. 

What I am sure of is that I'm enormously grateful for the support, contributions and engagement of this remarkable community over the past three decades, something words can't adequately express. Thank you for all that, for all you’ve contributed, and let's see where the journey goes from here. 

Ric Ford
Editor/Publisher
MacInTouch Inc

I hope he finds a new way to publish and write. Like me, the Macintouch community is old and curmudgeonly; it’s been a place that speaks truth and never falls for modern Apple’s too-frequent cons.

Saturday, December 30, 2017

Tech regressions: MORE, Quicken, PalmOS, iOS, Podcasts, Aperture, Music, iPad photo slide shows, and toasters.

One of the odder experiences of aging is living through technology regressions. I’ve seen a few — solutions that go away and are never replaced.

Symantec’s classic Mac MORE 3.1 was a great outliner/editing tool with the best style sheet implementation I’ve seen. It died around 1991. The closest thing today would be Omni Outliner — 16 years later. There’s still no comparable Style Sheet support.

Quicken for DOS with 3.5” monthly diskette records of credit card transactions was the most reliable and useable personal accounting tool I’ve experienced — though even it had problems with database corruption. I think that was the 1980s. Today I use Quicken for Mac, a niche product with unreliable transfer of financial information, questionable data security, and limited investment tools.

PalmOS Datebk 5 was an excellent calendaring tool with good desktop sync (for a while the Mac had the best ‘personal information management’ companion). That was in the 1990s. When PalmOS died we went years without an alternative. I briefly returned to using a Franklin Planner. Somewhere around year 3 of iOS we had equivalent functionality again — and a very painful transition.

iOS and macOS have seen particularly painful combinations of progressions and regressions. OS X / macOS photo management was at its best somewhere around the end of Snow Leopard and Aperture 3.1 (memory fuzzy, not sure they overlapped). OS X photo solutions had finally reached a good state after years of iPhoto screw-ups — the professional and home products more or less interoperated. All Apple needed to do was polish Aperture’s rough edges and fix bugs. Instead they sunset Aperture and gave us Photos.app — a big functional regression. Apple did something similar with iMovie; it’s much harder to make home “movies” than it once was.

iOS was at its most reliable around version 6. So Apple blew it up. Since that time Podcasts.app has gone from great to bad to not-so-bad to abysmal. The iPad used to have a great digital picture frame capability tied to screen lock — Apple took that away. For a while there was a 3rd party app that worked with iCloud photo streams, I could remotely add images to my father’s iPad slideshow digital picture frame. There’s nothing that works as well now; as I write this I’m working through a web of bugs and incompetence (I suspect a desperate timeout stuck into iTunes/iOS sync) to sneak some photos from Aperture to an iPad.

Apple Music is following the path of Podcasts.app as Apple moves to ending the sale of music (probably 2019). At the same time iTunes is being divided into dumbed down subunits (iBooks regression). The last 2-3 revisions of iTunes have been so bad that this feels almost like a mercy killing.

We don’t have a  way to avoid these regressions. Once we could have gotten off the train, now the train stations are dangerous neighborhoods of lethal malware. We need to keep upgrading, and so much is bundled with macOS and iOS that we can’t find 3rd party alternatives. Data lock is ubiquitous now.

I think regressions are less common outside digital world. It’s true toasters aren’t what they were, but since 2006 Chinese products have become better made and more reliable. Perhaps the closest thing to tech regressions in the material world is the chaos of pharma prices.

This takes a toll. There are so many better ways to spend my life, and too few minutes to waste. I wonder what these regressions do to non-geeks; I don’t think it goes well for them.

Friday, December 29, 2017

Did an autonomous Tesla kill its first cyclist?

This Nov 2017 crash hasn’t gotten enough attention …

Tesla Strikes and Kills UK Cyclist | Bicycling.com

… An 80-year-old man was killed Friday when a Tesla Model S, an electric car with some autonomous capabilities, struck him as he rode his bike near the U.K. village of High Shincliffe.

The cyclist—identified as Fred Heppell, a former bank manager from Lanchester, U.K.—was airlifted to a nearby hospital, where he later died. Initial news reports did not indicate if the driver faces any charges, although police are reportedly seeking eyewitnesses to the crash.

While not a fully self-driving vehicle, the Model S has an autopilot feature that allows the car to steer itself in certain circumstances. Promotional videos online show test drivers letting go of the steering wheel while the vehicle maintains speed and control on relative straightaways. (It tops out at 90 miles per hour in autopilot mode, according to the company website.) The car can also change lanes and park on its own.

It’s unclear if the driver in Friday’s crash had applied the autonomous technology at the time of the collision. U.K. reporters described the road where the crash occurred as “predominantly a straight road with gentle inclines.” Heppell was, by all accounts, an experienced rider.

“Fred averaged 10,000 miles per year on his bike and with his wife by his side had cycled across America, Australia, Argentina, Chile, New Zealand, and a host of European countries in his retirement years,” Heppell’s family told the British press…

I was unable to find any follow-up. I am skeptical of Tesla’s approach to autonomous vehicles; I think it is reckless. Any Tesla in autonomous or semi-autonomous mode should record 360° video so that accidents are well understood, and permits should not be issued without testing response to cyclists.

I think Google’s autonomous vehicles may, in time, be a boon to cyclists and pedestrians. Tesla, not so much.

Saturday, November 11, 2017

Taxing the externalities of the attention economy

The Economist has an excellent overview of the risks of the attention economy (11/4/17). The Gamergate connection is particularly good.

There is so much to say about all of the perverse consequences of funding the net through a tax on attention. I’m sure we don’t fully understand all of the implications; the reality may be even more grim than we know. It’s already grim enough though. So grim that the Russian-assisted collapse of the US government has seized only a fraction of our distracted attention.

It appears that most Americans are easily manipulated through modern meme-injectors like Facebook and Twitter. Vulnerability increases with lower education levels (among the privileged education is a rough proxy for cognition), but few are completely immune to distraction. We resemble a people who have never seen alcohol a few months after the whisky trade arrives.

If we believe the attention/ad-funded economy is the meme equivalent of fentanyl or tobacco, what do we do about it? There are lessons from managing addictive and health-destroying substances such as tobacco. It begins with taxation.

We tax cigarettes heavily. We can similarly tax net advertising. Our goal should be to increase the cost of online advertising several fold. We raise the cost until few advertisers can afford it. At that point Facebook has to turn to other revenue sources to maintain services — such as charging a yearly fee to users.

This is obviously not sufficient, but it’s a beginning.

Tuesday, March 21, 2017

Broken world: applying for a minimum wage job via a corporate HR web site

My #1 son is a special needs adult. He’s excited to start a $10/hour job running food around a sports stadium. It’s work he can do — he’s got a great sense of direction and he is reasonably fit.

The job engagement process is run by an archaic corporate web site that looks like it was built for IE 3. The site claims to support Safari but warns against Chrome. It is not useable on a smartphone.

The HR process requires managing user credentials, navigating a complex 1990s style user interface, and working around errors made by the HR staff — who probably also struggle with the software. He would not have the proverbial snowball’s chance without my ability to assume his digital identity.

Sure, #1 is below the 5th percentile on standard cognition tests — but this would have been a challenge to the 15th percentile back in the 90s. In the modern era, where most non-college young people are primarily familiar with smartphones, this is a challenge to the 30th percentile.

Which means the people who might want to do this job are being shut out by the HR software created to support the job. Which probably has something to do with this.

The world is broken.

#massdisability

Sunday, February 12, 2017

A change to my publishing streams

These are the last days of app.net. It ends in 3 days. App.net has been good to me. Great tools, smart and interesting people, RSS integration (albeit grudgingly at first) and the best communication/conversation platform I’ve worked with. App.net was what Twitter should have been. It has made clear how badly Twitter has failed.

These are, surprisingly, also the end times for Maciej’s Pinboard as a publishing platform. I have 33,236 pins there, and something is broken. Neither pinner.app nor pushpin.app is able to connect and download the streams. From what Maciej has written in the past I may have passed some throttling limit. I’m using Pinboard in ways he did not intend — and the iOS apps don’t scale either.

So I need to make some changes. Gordon’s Notes and Tech will stay on Blogger for now — it’s been surprisingly robust and trouble free for many years. It helps that there’s an exit strategy to WordPress if Google kills it, but as best I can tell Google is going to keep Blogger limping along.

My DreamHost wordpress microblog has mirrored my pinboard stream for years. I’m going to see if there’s a way to make it a true microblog publishing platform. I’m not sure how to do that, in particular it’s not obvious how I can conveniently post via Reeder.app. It will take me a while to work that out.

In the meantime there are several app.net spinoffs in the works. I’m tracking several of these as much as my time allows - especially pnut.io (@jgordon) and 10centuries (@jgordon). March 2017 is a heck of a month for me so things have gone slowly.

Uncertain times! For now if you follow kateva.org/sh you’ll see what I’m up to, no matter how the process evolves.

Saturday, December 31, 2016

Crisis-T: blame it on the iPhone (too)

It’s a human thing. Something insane happens and we try to figure out “why now?”. We did a lot of that in the fall of 2001. Today I looked back at some of what I wrote then. It’s somewhat unhinged — most of us were a bit nuts then. Most of what I wrote is best forgotten, but I still have a soft spot for this Nov 2001 diagram …

[Diagram: “Model 20010911”, the November 2001 causal diagram referenced above]

I think some of it works for Nov 2016 too, particularly the belief/fact breakdown, the relative poverty, the cultural dislocation, the response to modernity and changing roles of women, and the role of communication technology. Demographic pressure and environmental degradation aren’t factors in Crisis-T though.

More than those common factors I’ve blamed Crisis-T on automation and globalization reducing the demand for non-elite labor (aka “mass disability”). That doesn’t account for the Russian infowar and fake news factors though (“Meme belief=facts” and “communications tech” in my old diagram). Why were they so apparently influential? 

Maybe we should blame the iPhone …

Why Trolls Won in 2016, by Bryan Mengus, Gizmodo

… Edgar Welch, armed with multiple weapons, entered a DC pizzeria and fired, seeking to “investigate” the pizza gate conspiracy—the debunked theory that John Podesta and Hillary Clinton are the architects of a child sex-trafficking ring covertly headquartered in the nonexistent basement of the restaurant Comet Ping Pong. Egged on by conspiracy videos hosted on YouTube, and disinformation posted broadly across internet communities and social networks, Welch made the 350-mile drive filled with righteous purpose. A brief interview with the New York Times revealed that the shooter had only recently had internet installed in his home….

…. the earliest public incarnation of the internet—USENET—was populated mostly by academia. It also had little to no moderation. Each September, new college students would get easy access to the network, leading to an uptick in low-value posts which would taper off as the newbies got a sense for the culture of USENET’s various newsgroups. 1993 is immortalized as the Eternal September when AOL began to offer USENET to a flood of brand-new internet users, and overwhelmed by those who could finally afford access, that original USENET culture never bounced back.

Similarly, when Facebook was first founded in 2004, it was only available to Harvard students … The trend has remained fairly consistent: the wealthy, urban, and highly-educated are the first to benefit from and use new technologies while the poor, rural, and less educated lag behind. That margin has shrunk drastically since 2004, as cheaper computers and broadband access became attainable for most Americans.

…  the vast majority of internet users today do not come from the elite set. According to Pew Research, 63 percent of adults in the US used the internet in 2004. By 2015 that number had skyrocketed to 84 percent. Among the study’s conclusions were that, “the most pronounced growth has come among those in lower-income households and those with lower levels of educational attainment” …

… What we’re experiencing now is a huge influx of relatively new internet users—USENET’s Eternal September on an enormous scale—wrapped in political unrest.

“White Low-Income Non-College” (WLINC) and “non-elite” are politically correct [1] ways of speaking about the 40% of white Americans who have IQ scores below 100. It’s a population that was protected from net exposure until Apple introduced the first mass market computing device in June of 2007 — and Google and Facebook made mass market computing inexpensive and irresistible.

And so it has come to pass that in 2016 a population vulnerable to manipulation and yearning for the comfort of the mass movement has been dispossessed by technological change and empowered by the Facebook ad-funded manipulation engine.

So we can blame the iPhone too.

- fn -

[1] I think, for once, the term actually applies.

Saturday, December 17, 2016

Piketty's latest work on inequality is wrong about education.

The NYT has a readable summary of Thomas Piketty, Emmanuel Saez and Gabriel Zucman’s US income research. Much of it is familiar, but I was struck by this paragraph:

[since 1979] … Younger adults between 20 and 45 years old have seen their after-tax incomes flatline.

But over the same period, seniors in the bottom half have seen their after-tax incomes grow by over 70 percent. The bulk of that gain represents increased health care spending through Medicare.

Growth rates of a few percent a year do add up; health care is eating everything. Maybe it’s time to reread my old health care post.
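
A rough check on how that 70% figure translates into an annual rate, assuming the comparison window runs roughly 1979 to 2016:

    # 70% cumulative growth over ~37 years works out to about 1.4% per year compounded
    years = 2016 - 1979
    annual = 1.70 ** (1 / years) - 1
    print(f"{annual:.1%} per year")   # -> 1.4% per year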

Their findings are very important, but one of their recommendations falls flat (emphases mine) …

improving education and job training, equalizing distribution of human and financial capital, and increasing labor bargaining power, combined with a return to steeply progressive taxation

No, education and job training aren’t the answer. Roughly 40-50% of the US population has an IQ of less than 100. People with an IQ of under 100 have many skills, but they are not going to succeed in an academic program. Canada has the world’s highest “college” (includes 2-year vocational programs) graduation rate, and even they top out at around 56% of the population. I’m not sure why economists struggle with this basic arithmetic; my guess is they spend too much time with the cognitive elite.

What is the answer? We need to flip our thinking. We can’t change people to fit the work available in the natural post-industrial economy. We need to change the work to fit the humans. We need to incentivize work that is meaningful and rewarding across the cognitive spectrum. Germany did some of that by biasing their economy towards manufacturing. We can do some of that too (sorry Germany, that’s going to hurt you!), but we’re going to have to think more broadly. We’ll need to provide direct or indirect subsidies for work that’s productive even if it can’t compete with automation. We’ll have to apply work-support lessons from the US military (long history of productive work across the cognitive spectrum) and from traditional disability work support programs.