
Tuesday, November 12, 2024

Mass disability measured: in 2016 40% of OECD workers could not manage basic technology tasks

It's hard to imagine how simple much of our technology environment was in the 1980s. Much of what we interacted with was still understandable in simple mechanical terms. Early Mac OS Classic was vastly easier to understand and work with than anything we can imagine now; the closest analogy would be year two of the iPhone.

But even 8 years ago only a small slice of humanity could manage their technology environment (Jakob Nielsen, November 2016; thanks to Matt Panaro for sharing).

... The data was collected from 2011–2015 in 33 countries and was published in 2016 by the OECD... In total, 215,942 people were tested, with at least 5,000 participants in most countries... 

... research aimed to test the [job-related including technology] skills of people aged 16–65 ...

...participants were asked to perform 14 computer-based tasks. Instead of using live websites, the participants attempted the tasks on simulated software on the test facilitator’s computer. This allowed the researchers to make sure that all participants were confronted with the same level of difficulty across the years and enabled controlled translations of the user interfaces into each country’s local language ...

The tasks they chose were typical business-worker tasks, the kinds of tasks workers got lots of training on back in the 90s. They never tried anything as tough as the HR software my legally disabled son has to work with at his minimum-wage hourly job (we do it for him, obviously).

Not all OECD members are equally wealthy, but Nielsen reproduces a country-specific bar chart from the study. US numbers are not hugely different from the OECD averages; I've added US numbers based on squinting at the chart. (The results are reported as cumulative skill levels: each level assumes the skills of the levels below it.)

26% could not use a computer at all (US 20%)

14% could delete an email (US 15%)

29% could manage "reply all" or "Find all emails from John Smith" (US 35%)

5% could do "You want to know what percentage of the emails sent by John Smith last month were about sustainability." (US 5%)

They did not test the ability to maintain multiple malware-free home computers, iPhones, iPads and the like. I'd guess that's more like 0.1%.

Jakob Nielsen has been talking about "usability" for as long as I can recall. So eight years ago he put this into the context of computer design and training, which has been the mainstream interpretation of findings like this over the past 50 years of growing technological complexity.

Fifty years is a long time to wait for better software. Now we anticipate AI that will monitor computer screens and guide users to complete tasks.

But maybe better software won't help. After all, 40% of users couldn't do relatively simple software tasks. Maybe the problem is human cognitive limits.

I coincidentally used that 40% number in a rather similar post the last time Trump won, around the time that OECD study was published. Forty percent is my guesstimate of the percentage of Americans who cannot hope to earn the approximately $70,000 a year (plus benefits) needed to sustain a single adult in the low end of American middle-class life. A life with some savings, yearly vacations, secure shelter, even a child or two.

Since at least 2008 I have called this global phenomenon mass disability. Every few years I see the basic concept emerge, typically with more euphemisms, only to be quickly forgotten. It's hard to fix anything when the fix begins with a very unhappy truth.

The unhappy truth is that the complexity of our technological environment has exceeded the cognitive grasp of most humans. We now have an unsustainable mismatch between "middle-class" work and the cognitive talents of a large percentage of Americans.

There are things to do, some of which Biden started. We probably needed to have started on them back in the 90s. Perhaps Musk will have ideas. It's all on the oligarchs now.

Thursday, October 31, 2024

Autonomous vehicles: A prediction

I'm putting this down as a marker for me to look back to in a few years.

When autonomous vehicles are able to operate in Minnesota winters they will also be able to converse about quantum field theory, exotic mathematical geometries, politics in the Maldives, art history, and their latest contributions to classical music. 

Sunday, October 13, 2024

Muskism, Vance, AI and the American choice between oligarchy and an imperfect democracy

In advance of Nov 5, 2024, some thoughts on Muskism. Because I can't resist this kind of thing.

Musk and his fellow tech oligarchs more or less believe in the need to preserve a white tribe, eugenics, and the relative supremacy of a male pattern brain. But I don't think that's what is driving Musk's political agenda.

I think Musk is sincere about his stated AI beliefs. He expects at least the genius-in-every-pocket described by Anthropic CEO Dario Amodei. He believes American democracy will not survive the AI driven chaos to come. This challenge, Musk believes, will be far more disruptive than mere world war.

Musk and his tech allies believe that humanity's best chance is to be guided by a kind of aristocracy of wealth and (in his imagination) clear thinking intellect. Perhaps if we pass into a future AI golden age democracy might be restored, but that will depend on whether the SentientAsBestWeCanTell AIs of 2030 get the vote. The Muskites believe Trump is weak, dying, and will not finish a second term. He will be pardoned, given money and left to the side. Vance is their man. The tech oligarchs will steer America and the world through what lies ahead.

I believe Musk is correct about the AI chaos to come. In the old days we called this period the "technological singularity" -- back when singularity meant absence of prediction rather than a religious experience.

I don't, however, think Musk and his billionaire minions are the answer to this challenge. I would prefer to give democracy a chance.

But does democracy really have a chance? November 2024 is the test. If Americans, knowing all they know now, having lived through a Trump presidency, still reelect Donald Trump -- then it's clear our democracy is not up to the much greater challenge of even collective climate geoengineering -- much less the AI transition.

If "Trump" (Vance really) wins in 2024 I will be sad. On the other hand, I will then accept that Musk was correct. If American democracy can't handle the Trump idiocy it will have demonstrated it is entirely insufficient for what lies ahead. In that case perhaps the oligarchy is the best we can do.

If Vance loses in a few weeks -- then we will see how democracy manages the chaos times. That's what I have voted for.

Friday, September 13, 2024

The End Times have come for the Pinboard.in bookmarking service

In the years following the Great Recession, from 2010 to 2013, many web services went offline. In retrospect that was the end of the Berners-Lee web.

During that time, starting on 12/29/2011, I began using Maciej Cegłowski's Pinboard bookmarking site as a kind of micro blog.

Pinboard filled part of the Google Reader Social vacuum. Various apps and services around Pinboard, in addition to IFTTT, made that feasible. My Pinboard 's' posts were published to Twitter, then app.net, and most recently Mastodon (and probably a few other services too). They were also archived at kateva.org/sh

Pinboard imported my old Google Reader social bookmarks so it's a pretty complete set of things I shared, mostly tech and events that seemed to have potential lasting meaning. There are over 50,000 pins now. There were apps written for Pinboard, creating a small ecosystem of added value.

I'm still on my original subscription plan - about $20 a year or so. It ends in Feb 2025 and I won't be renewing. I feel like it's 2013 again.

Over the past few years there have been a slowly increasing number of Pinboard outages, with less and less communication. While debugging the last outage I purged my local history from the third-party Pins iOS app and found that Pinboard was throttling its download API: I could download only 100 of my 50,000 or so pins. (It's still easy to download the whole set as a file.)

That's ominous, but more importantly Pinboard is a one person show and that person is no longer responding to support emails. Maciej is no longer active on social media that I know of. His Pinboard.in support forum has been quiescent for years. I'll be researching my micro blog options and I'll write about what I come up with on tech.kateva.org. 

10 years is an eternity on the web. Pinboard had a good run, but it too is passing. I have my archives and you can still download JSON or HTML versions of past bookmarks. I might wish for a more graceful end, but Pinboard was a good service while it lasted and there is a clear data exit if not a clear replacement. Thank you, Maciej, for the value you delivered to me.
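For anyone following me out the door, the data exit is concrete: Pinboard's v1 API (like the settings-page file download) will return the whole pin set as JSON. A minimal sketch, assuming the API keeps answering; the auth token is the "username:HEX" string from the account settings page (placeholder below):

```python
# Minimal full export via Pinboard's v1 API -- the same data as the
# settings-page file download. AUTH_TOKEN is a placeholder, not a real token.
import json
import urllib.request

AUTH_TOKEN = "username:XXXXXXXXXXXXXXXX"

url = ("https://api.pinboard.in/v1/posts/all"
       f"?format=json&auth_token={AUTH_TOKEN}")

# posts/all is aggressively rate limited, so call it rarely and keep the file.
with urllib.request.urlopen(url, timeout=120) as resp:
    posts = json.load(resp)

with open("pinboard-archive.json", "w", encoding="utf-8") as f:
    json.dump(posts, f, ensure_ascii=False, indent=2)

print(f"Saved {len(posts)} pins")
```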

Update: via a Mastodon friend a Hacker News article on the ignominious end of Pinboard and some alternatives: https://news.ycombinator.com/item?id=41533958

Wednesday, August 28, 2024

In which I declare my expert judgment on AI 2024

These days my social media experience is largely Mastodon. There's something to be said about a social network that's so irreparably geeky and so hard to work with that only a tiny slice of humanity can possibly participate (unless and until Threads integration actually works).

In my Mastodon corner of the "Fediverse", among the elite pundits I choose to read, there's a vocal cohort that is firm in their conviction that "AI" hype is truly and entirely hype, and that the very term "AI" should not be used. That group would say that the main application of LLM technology is criming.

Based on my casual polling of my pundits there's a quieter cohort that is less confident. That group is anxious, but not only about the criming.

Somewhere, I am told, there is a third group that believes that godlike-AIs are coming in 2025. They may be mostly on Musk's network.

Over the past few months I think the discourse has shifted. The skeptics are less confident, and the godlike-AI cohort is likewise quieter as LLM based AI hits technical limits. 

The shifting discourse, and especially the apparent LLM technical limitations, mean I'm back to being in the murky middle of things. Where I usually sit. Somehow that compels me to write down what I think. Not because anyone will or should care [1], but because I write these posts mostly for myself and I like to look back and see how wrong I've been.

So, in Aug 2024, I think:
  1. I am less worried that the end of the world is around the corner. If we'd gotten one more qualitative advance in LLM or some other AI tech I'd be researching places to (hopelessly) run to.
  2. Every day I think of new things I would do if current LLM tech had access to my data and to common net services. These things don't require any fundamental advances but they do require ongoing iteration.  I don't have much confidence in Apple's capabilities any more, but maybe they can squeeze this out. I really, really, don't want to have to depend on Microsoft. Much less Google.
  3. Perplexity.ai is super valuable to me now and I'd pay up if they stopped giving it away. It's an order of magnitude better than Google search.
  4. The opportunities for crime are indeed immense. They may be part of what ends unmediated net access for most people. By far the best description of this world is a relatively minor subplot in Neal Stephenson's otherwise mixed 2019 novel "Fall".
  5. We seem to be replaying the dot-com boom and crash, but faster and incrementally. That was a formative time in my life. It was a time when all the net hype was shown to be .... correct. Even as many lost their assets buying the losers.
  6. It will all be immensely stressful and disruptive and anxiety inducing even though we won't be doing godlike-AI for at least (phew) five more years.
  7. Many who are skeptical about the impact of our current technologies have a good understanding of LLM tech but a weak understanding of cognitive science. Humans are not as magical as they think.
- fn -

[1] I legitimately have deeper expertise here than most would imagine but it's ancient and esoteric.

Friday, August 16, 2024

Fantasies of a tyrant: The Deshittification Corps

This morning on my Mastodon:

It’s possible that we have built an economic and technical environment that is too complex and treacherous for almost all humans to live in.

Perhaps the Neanderthals had similar feelings about the warmer, buggier, more diseased environment that came with the Skinnies (their name for us).

I wonder who will inherit our world.

Yes, I'm Old. But the young are distressed and anxious, so perhaps they feel it too. To me many things feel frayed, fragile, buggy, and poorly maintained. As though there wasn't enough time to do it right before the next urgent thing. 

Apple is supposedly off trying to build Apple Intelligence -- and in the meantime Photos.mac is the shittiest software I've been obliged to consistently use [1]. For a small fraction of what Apple vaporized on autonomous vehicles and the Vision Pro thing they could have slowed the growth rate of their technical debt and maybe even implemented fucking Folder search in Photos.mac [3].

Which brings me to the Deshittification Corps. If I were a Tyrant, which is more than you deserve, I would create a Deshittification Corps (DC) to fight enshittification [2]. My Deshittification Corps would be a force of about 10,000 people who would evaluate the services we rely on and give them shittification ratings, which companies would be obliged to post publicly, a bit like cigarette-carton notices. For example:

This service has a Shittification Rating of D. This is a really shitty service. We recommend everyone who works for this company seek new employment.

Companies that didn't improve their Shittification rating would be subject to a special tax that would start at 0.01% of gross revenue and double every day ... 

- fn -

[1] Ok, SharePoint is shittier. But now you're triggering me. Besides, one of the best things about retirement is no more SharePoint.

[2] Yes, Doctorow was talking about software and online services, but I'm bending the meaning more broadly to encompass government and other services.

[3] It would still be shitty software, but that would be the biggest improvement since iPhoto stabilized.

Thursday, July 11, 2024

The LLM service I will pay for -- call Social Security for me

One of the fun things that happens to Americans as we become redundant to life's requirements is signing up for Medicare. There's a sort-of-useful, cobbled-together web site to do this. Processing is supposed to take under 30 days, though I've read the federal mandate is 45 days. Perplexity basically says it's heading towards a 60-day average.

Anyway, my wee application is well over the 30-day limit. There's no way to contact anyone other than by phone, which my wife assures me takes at least 45 minutes on hold. (Don't fall for the "call back" and "hold your place in line" options -- my wife tells me they simply don't bother.)

And, yes, the hold music is horrendous. As Emily says: "One of the challenges of getting old is listening to music on hold. No one ever tells us."

So, while I wait on hold I once again think how there's one LLM service I want to pay for. Want.

I want to give my Agent the Social Security and Medicare data it is likely to need: case number, my SSN, my phone, etc. I want it to call Social Security using my voice and sit on hold for days, weeks, years until someone accidentally answers. Then it begins the conversation while paging me to swap in ... with a text summary of the discussion so far and a timer to join in 5 ... 4 ... 3 ... 2 ... 1 ...

Yeah, that would be worth some money.
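To be concrete, the loop I'm imagining is simple. Nothing below exists for me to buy; every function is a hypothetical stand-in for telephony, hold-music detection, summarization, and paging services:

```python
# Toy sketch only: every function here is a hypothetical stand-in, not a real API.
import time

def human_on_line(audio: bytes) -> bool:
    """Hypothetical classifier: hold music/IVR prompts vs. an actual human."""
    raise NotImplementedError

def page_me(message: str) -> None:
    """Hypothetical push notification to my phone."""
    raise NotImplementedError

def sit_on_hold(get_audio, get_summary) -> None:
    """Hold forever; when a human answers, page me a summary and count me in."""
    while True:
        if human_on_line(get_audio()):
            page_me(f"Someone answered. Summary so far: {get_summary()}")
            for n in (5, 4, 3, 2, 1):   # the swap-in countdown
                page_me(str(n))
                time.sleep(1)
            return  # I take over the call here, in my own voice
        time.sleep(5)  # agent keeps holding, answers routine voice prompts
```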

Update 7/19/2024: I finally got through to be told that requests were mailed to me 6/3 and 7/3 requesting additional information. We are very vigilant about social security correspondence so it's very unlikely they were delivered here. We have seen MN Post Offices lose tracked social security correspondence, presumably due to internal theft.

Friday, April 05, 2024

Apple antitrust: Dreaming of freedom for photos

In early 2024 the American DOJ sued Apple for an illegal monopoly over the smartphone market. I agree with the thrust of the suit. Apple may not have a conventional monopoly, but for a customer like me switching costs are high. The data lock is strong. Apple feels like a de facto monopoly.

I don't know how the suit will evolve over the next 5-10 years of courtroom work. Somewhere along the line I hope that it produces more competition within the Apple ecosystem. In particular it would be rather nice if the courts decide that Apple uses Photos lock-in as a part of its monopoly.

I'm not betting on this happening though. Very few people seem to care about images that are more than a week old and almost nobody does any photo organization or annotation. Apple's Photos products have been deprecating annotations since iPhoto quietly dropped text descriptions of named photo albums. The current version of Photos.mac doesn't even support searches on folder names, and Photos.ios can't view or change the photo titles seen in Photos.mac.

But ... let's say a miracle occurs. Here are two ways that Apple could free photo management from their iron control and provide options for the tiny sliver of the Apple base that cares.

  1. Apple could define an interchange format for digital photo collections. Aperture Exporter showed the way. It's not that hard: original image, high-res archival version, edited version, XMP metadata, folder/album structures, edit directions if possible. All file based, browsable in a web browser, and well documented. Other vendors can import from it. (See the sketch after this list.)
  2. Make PhotoKit API the only way for Photos to interact with iCloud and make it entirely public. (Current PhotoKit is very limited and the most interesting parts are not public.) A vendor could then greatly extend or replace Photos.mac. I'd pay in blood.
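To make item 1 concrete, here is my guess at the shape of such a layout; not any vendor's actual format, just what a file-based export could look like:

```python
# A sketch of a file-based interchange layout -- my guess at a shape,
# not anyone's shipping format. One directory per photo; everything
# readable with nothing but a file browser.
from pathlib import Path

def export_photo(root: Path, photo_id: str, assets: dict[str, bytes],
                 xmp_metadata: str, album_paths: list[str]) -> None:
    """Write one photo's assets, XMP sidecar, and album memberships."""
    d = root / "photos" / photo_id
    d.mkdir(parents=True, exist_ok=True)
    # original, archival, and edited renditions, when present
    for name in ("original.raw", "archival.tiff", "edited.jpg"):
        if name in assets:
            (d / name).write_bytes(assets[name])
    # titles, keywords, ratings, descriptions in Adobe's open XMP format
    (d / "metadata.xmp").write_text(xmp_metadata, encoding="utf-8")
    # folder/album structure as plain text, one album path per line
    (d / "albums.txt").write_text("\n".join(album_paths), encoding="utf-8")
    # edit directions would land here too, if the vendor will disclose them
```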
None of this will ever happen.

Tuesday, January 23, 2024

Why we can't have good personal finance software any more

This Nov 2023 blog post from the CEO of a budget/financial management software firm (Monarch) tells us a lot about why we don't have alternatives to Intuit ...

... personal finance apps, which typically rely on data aggregators (Plaid, Finicity, etc) to connect to tens of thousands of financial institutions to aggregate the necessary financial data. These data fees are quite expensive, which means a personal finance app is losing money on each free user and must make it up in some other manner...

... Personal finance apps are only as useful as their underlying data. As mentioned above, keeping this data up-to-date is a massive and expensive challenge that everyone underestimates. Subscription-based services are incentivized to constantly invest in this data architecture; otherwise, customers churn...

... Unfortunately, no single data aggregator provides complete coverage of all financial institutions. So we have integrated with all of them at Monarch. What’s more, we’ve spent years (and millions of dollars) building an intelligent data infrastructure that can route users to the best aggregator for a given financial institution. We’ve also invested heavily in AI-based transaction cleansing and classification. I believe we have the best financial data infrastructure that has ever been built for this use case. In full transparency, this is an ever-shifting landscape and there are still a few large institutions that don’t want to share their data, so our coverage is not yet 100%. We plan to get there eventually...

Intuit got the relationships early and has some leverage over banks (which seem to be normally greedy but extraordinarily incompetent). Everyone else is at the mercy of the aggregators. An evil (or just profitable) dominant vendor might spend quite a bit of money to keep this moat as deep and merciless as possible.
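The routing idea in the Monarch quote is conceptually simple; the moat is the per-bank relationship and reliability data, not the code. A sketch of the concept with invented names (the real system is surely far more involved):

```python
# Sketch of aggregator routing: try the historically best aggregator for
# each institution, fall back to the rest. Names invented for illustration.
AGGREGATOR_RANKINGS = {
    "Example Bank": ["plaid", "finicity", "mx"],
    "Example Credit Union": ["finicity", "plaid"],
}

def fetch_transactions(institution: str, connect) -> list:
    """connect(aggregator, institution) returns transactions or raises."""
    for aggregator in AGGREGATOR_RANKINGS.get(institution, ["plaid"]):
        try:
            return connect(aggregator, institution)
        except ConnectionError:
            continue  # record the failure; re-rank aggregators offline
    raise RuntimeError(f"No aggregator could reach {institution}")
```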

Sunday, January 07, 2024

Quicken for DOS cannot be recreated: Why we can't have good personal finance software any more.

Almost 40 years ago we used Quicken version 2 or 3 for DOS 3.1 on a Panasonic 8086 with 640K of memory and a CPU too feeble for a modern toaster. 

Every month a 3.5" (not 5.25") diskette came in the mail with our bank and credit card transactions. We loaded that into Quicken. We entered cash transactions manually. It worked pretty well, though Quicken was plagued with database corruption bugs until the 90s. When Microsoft Money appeared one could migrate transactions and history from one to the other.

There's no modern equivalent. Today's vendors sell our data to third parties and then market products to us. Vendors have a hard lock-in. This kind of service decay is now known as "enshittification". Today in a Mastodon thread I listed what drove that enshittification*:

  1. The banks feared disintermediation and commodification so they stopped cooperating and/or raised transaction costs. 
  2. Selling services to customers and selling customer data were both seemingly painless ways to increase margins for a publicly traded company
  3. Costs and user experience both favor user data in the cloud — which aligns with selling user data and services.
  4. Customer data lock strategies became irresistible and with cloud migration they were easy to implement.
Of these the first is the big one. If customers could get their data then small vendors could produce niche subscription products. But the banks aren't going to cooperate. They know better now.

I don't know if we'll ever see good products again. Perhaps if Apple or Microsoft went into banking they'd provide an API for developers to use. Of course we'd all have to use Apple's Bank for everything but, speaking for my family, they already own us.

*With two 't's per Doctorow.

Is it possible to have too many wizards in software development?

Once upon a time, long ago, a middle manager (D.P.) in a tech org told me she didn't want her team to be made up only of wizards. I don't recall her exact words, but the essence was that developing quality products takes a range of skills. A wizard or two is useful, but much of the work is methodical and tedious and repetitive. A wizard will get bored and restless. A methodical, disciplined, and reliable non-wizard who enjoys or tolerates the less celestial work will be happier and more productive with many tasks. 

Basically, producing and maintaining quality software products requires a range of skills and talents and temperaments. Sometimes you need a carpenter, sometimes you need a finisher.

I realized today that I've only ever heard that from her. It seems Google and Microsoft only hire wizards. So maybe she was wrong, but I've thought of it often.

Wednesday, August 30, 2023

Mass disability - dysfunctional web sites, apps for everything

I last wrote about "mass disability" and the Left Behind in a 2021 post. The concept has sometimes seemed on the edge of going mainstream but it's never quite made it. Maybe we're getting closer; a recent Michael Tsai post (No App, No entry) reminded me of my Mastodon thread from a few weeks ago:

What is the crazy that drives Trumpism and the many global equivalents?
It is that the minimal IQ to function well in the modern world is now about 120 and that eliminates most people.

This is both the most important fact of our time and the least palatable. It is the thing that cannot be said and it will be the ruin of us if we don't say it ...

I've been saying this for years. Today I was reminded of it while doing some travel booking.
During the bookings I encountered:
1. A web site that didn't work with my older version of Safari (I knew what was wrong and switched to Chrome).
2. A Delta web site bug (I recognized it as a bug and knew what to do).
3. A place that was out of rental cars, but I knew Expedia would have contracts that would let me find one.
4. Travel web sites that all needed new credentials ...
... These are all routine parts of modern life including maintaining flaky computer systems (let me tell you ...) and phones ...

It was not like this even 35 years ago. Travel agents handled travel complexity. There were no smartphones. Computers were very limited. There was no internet for most. By necessity everyday life was much simpler. Most people could cope with it.

Now most cannot cope.

This is the most important feature of our time. And nobody can talk about it. 

I remember some good discussions on this thread but I can't find any of them now. Perhaps by design Mastodon has a limited memory. (My home instance has no search, so I had to download my archive and search it to find the date of the post. Then I could slowly navigate to it.)
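(For anyone else stuck doing this: the archive Mastodon exports includes an ActivityPub outbox.json, and a few lines of scripting beat paging through it by hand. A sketch, assuming the export format hasn't changed:)

```python
# Search a Mastodon account archive without server-side search. The
# export's outbox.json is ActivityPub; each Create activity carries the
# post HTML in object["content"].
import json

with open("outbox.json", encoding="utf-8") as f:
    outbox = json.load(f)

for item in outbox.get("orderedItems", []):
    obj = item.get("object")
    if isinstance(obj, dict) and "minimal IQ" in obj.get("content", ""):
        print(obj.get("published"), obj.get("url"))
```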

I expanded on the theme a bit later:

Hotel laundry, year 2000:

1. Insert quarters to buy detergent, operate washer and dryer.

IQ requirement: 65 (my son could do this after a demonstration).

Hotel laundry, year 2023:

1. Scan QR code to download an app whose profit comes from unspent funds.

2. Install app, create account with Apple ID.

3. Figure out cryptic UX so you can deposit funds (several odd, unintuitive steps).

4. Deposit funds, paying just enough to cover this sesh. Pay the 25-cent low-balance transaction penalty ...

5. Spot the scam behind the app and avoid it (e.g., fund only the minimum).

6. Diagnose why, after paying money and confirming the machine, it's still not working.

7. Authorize the specific transaction.

8. Start laundry.

(My son could not do this.)

9. When complete, delete the app.

IQ requirement: minimum 110, higher to spot the scam.

This is why America is burning.

People are scared and angry and feeling left behind -- and they can't come out and say they are unable to manage their tech. Because that's the ultimate shame.


Wednesday, May 10, 2023

Home sleep monitoring with Apple Watch and Google (Nest) Home Video in adult with limited ability to describe a nocturnal health issue

I'm changing the details here for privacy reasons, but sharing some technical setup advice.

A special needs adult with limited ability to give reliable history has a medical issue under complex active evaluation. Since the issue often manifests during sleep it has been difficult to observe. This person's parents/guardians are both physicians.

Valuable results have been obtained with a combination of an Apple Watch (already owned), the Apple Health app, the "Heart Reports" iPhone app, a Google Nest Cam (already owned), the Google Home app, and a Nest Aware subscription ($12/month for 7 days of continuous monitoring; the camera comes with a 30-day free trial).

When you set up the camera you need to turn on the continuous video AND (if desired) the continuous audio. The Nest cam's green "active video" notification light may disturb sleep; it cannot be turned off (feature removed 2022) but you can turn down the intensity and cover it with tape.

The patient wears the Apple Watch during sleep. The following day one can review the Apple Watch report using either the native Health App or, as many will prefer, the 3rd party "Heart" Reports App. That last product outputs a wide variety of reports as PDF that can be shared with a healthcare provider; it's $4 one-time and supports family sharing.

With the Apple Watch report and Nest Aware's automated event detection it's possible to scroll through an evening's sleep fairly quickly, with special attention to Apple Watch awakening events and Nest Aware events.

After a few days of study the Nest Aware subscription can be disabled. The Health app data and video can be shared with providers if needed.

This cannot replace a formal sleep lab observation study but those are rarely done now and are very expensive. This method had a total cost of $16 (less actually since the free trial was active). The technique may be particularly useful for persons whose ability to give history is limited. It may also be useful when sleep labs are not affordable or available.

UPDATE 5/15/2023. We concluded our clinical evaluation and discontinued the trial subscription. Although the video record was very valuable, and although the hardware performed very well, I was unimpressed with Google Home software. In particular:

  1. The iPhone History view really only works in portrait mode; in landscape mode the bottom part of the image doesn't render well. Navigating the history is tedious and the playback can get stuck. Sometimes I had to quit and restart.
  2. The web view is much more limited than the older Nest software and has NO support for history review at all.

Friday, April 28, 2023

Large Language Models - evolutionary tree and selection algorithm 2023

The following two diagrams were taken from Mooler et al's GitHub LLM guide (April 2023). I'm posting them here as a snapshot of this historic moment. The root of the LLM evolutionary tree goes far back in time, almost five years ago. Click image for full res.

Algorithm for selecting an AI technology


LLM Evolutionary Tree

Friday, April 21, 2023

Prosumer digital image management has not progressed to our advantage

Every so often the software market fails. I've had this happen to me a few times. From 1997 to 2007 I used a variety of PalmOS devices for what we used to call "personal information management" (PIM) - including Contacts, Notes, Tasks and Calendaring. My many PalmPilot/Palm handheld stylus devices synchronized by cable connection with Palm desktop software.

PalmOS died around the time the first iPhone came out. That original iPhone was both revolutionary and crappy. Functionally it was a huge regression from PalmOS Calendaring and other PIM solutions, but it was immediately clear that the iPhone was the future (seriously, there were no honest skeptics). Palm had been ailing already, but at that moment it was utterly dead.

It took three years for the iPhone to develop usable solutions for the "PIM-4" that worked across devices (often using either Google or Microsoft Exchange). During that time I had no handheld solution; I returned to using a paper Franklin planner. Finally, in 2010 or so, I was able to transition to the iPhone and iOS.

The market failure of digital image (and video) management has lasted longer, and there's no end in sight. This means something.

Things were actually looking pretty good for image and video management in 2015. Apple had consumer (iPhoto) and prosumer/professional (Aperture) applications that (mostly) shared the same image library. Things were not perfect -- Aperture had had years of horrible bugs and performance issues, but in retrospect this was a golden age. SSDs were fixing the iPhoto/Aperture performance issues and there were several reasonably priced alternatives including Adobe Lightroom. We didn't know how well we had it.

And then 2015 was when Apple killed both Aperture and iPhoto. There was no replacement for Aperture; users were left stranded with limited ability to migrate to another platform. Photos replaced iPhoto, but in most ways it was a functional regression. There was only one Photos advantage -- it promised a cloud-centric approach to image management with some limited backup features. If your iPhone or laptop was lost or destroyed your Apple Cloud images were probably safe -- as long as you paid for storage or didn't get locked out of iCloud by a phone thief.

Several alternative prosumer image management solutions emerged. But they all had the same problem Aperture had -- they all had severe data lock. If the software were to be discontinued, as happens to most products, there would be no way to extract one's images, image edits, and image metadata (ratings, keywords, titles, descriptions, albums, and on and on). In addition, perhaps inspired by the power of this data lock, many vendors moved to a subscription model. Adobe Lightroom now costs $120 a year; if you don't pay, your photo library is essentially dead. Adobe can, if they wish, double or triple that price and customers will simply have to pay up. (I don't know what happens to the image library when a subscriber dies.)

I hoped Apple Photos would mature and develop more advanced features, but it has essentially languished. Recently Apple introduced a "Shared Library" model that is complex to use and, in my experience, has weird bugs and permission problems. (Lesson to users - if you ask Apple for something be prepared to regret your request.)

Eight years after Aperture died there still is not a great prosumer photo management solution for macOS customers. All the options have Hotel California Syndrome -- you can check in but you can never leave. Apple's only option, the most natural fit for macOS users, is dreadful and may be deteriorating. Many choices are subscription based and it's very easy for vendors to raise costs.

It's not hard to create a new standards- and file-based photo management solution. The file system does much of the work. Adobe has an open specification for image metadata management (XMP). Image-to-album, project, and folder relationships are simple row triples. We've known how to integrate external image editors for decades [1].
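To make "simple row triples" concrete, a sketch using SQLite: one table covers albums, projects, and folders. This is my illustration of the idea, not any product's schema.

```python
# One triple table: (image, relation, container) covers albums,
# projects, and nested folders.
import sqlite3

db = sqlite3.connect("library.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS membership (
        image_id   TEXT NOT NULL,   -- stable ID: file path or UUID
        relation   TEXT NOT NULL,   -- 'in_album', 'in_project', 'in_folder'
        container  TEXT NOT NULL    -- album/project/folder name or ID
    )
""")
db.executemany(
    "INSERT INTO membership VALUES (?, ?, ?)",
    [
        ("IMG_0001", "in_album", "Yellowstone 2015"),
        ("IMG_0001", "in_folder", "Travel/2015"),
        ("IMG_0002", "in_project", "Family Book"),
    ],
)
# Everything an album tree needs is one query away:
for row in db.execute(
    "SELECT image_id FROM membership WHERE relation='in_album' AND container=?",
    ("Yellowstone 2015",),
):
    print(row[0])
```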

It's not hard ... but it hasn't happened. No vendor has decided to disrupt the marketplace and no open source (really open data structure is what we care about) solution has emerged.

Why not?

My best guess is that the Cloud is the problem. We've only gradually learned how to build responsive synchronizing Cloud products and they are not intrinsically file based. Development is much more challenging and the data lock advantage is irresistible for incumbents.

In the absence of a decent solution vendors are starting to build around the Apple Photos framework. This week Power Photos has a migration and access project. CYME Peakto is some mixture of Photos extension and standalone management solution. Houdah Photos Workbench adds a minuscule number of missing features to Photos.app. I can sort of imagine how these products might work, but Photos is a terrible foundation on which to build.

It's easy to imagine ways Apple could help, but they've been butchering photo management for a long time. They appear to be broken. The more realistic hope is that it will become easier for open source and other vendors to implement a standards-based Cloud solution that would allow library migration between cooperating vendors -- either through direct Cloud-to-Cloud communication or (better) a file-based interchange format (what's a TB or two between friends?). I would be happy to pay a $200/year subscription fee for that kind of data freedom solution.

I've spent 7-8 years sitting on Mojave preparing to migrate to Apple Photos. The more I use Apple Photos the less I like this idea. At this point I expect to convert my beloved 2015 MacBook Air to a non-networked Aperture machine and purchase a new M2 machine for my other work. Since Ventura, Photos.app no longer supports importing Aperture libraries, so I'll be looking for other migration options over the next one to two years. Maybe some vendor will decide to disrupt the data-lock. In the meanwhile I'll test Power Photos migration by periodically migrating my Aperture library to Photos.

[1] For each image store the original, the proprietary image editor's non-destructive edit recipe, and the most recent edited version in a user-defined format (lossy or lossless). If the editor is gone or changed the edit recipe is useless, but the edited version is good.

Monday, April 03, 2023

We need a new word for the historical singularity.

TLDR: The "technological singularity" was an important and useful term with a clear meaning. Then it became the "Rapture of the Nerds". We need a new term.

--

I first heard the word "singularity" in the context of black hole physics; it dates back at least to the early 20th century:

ChatGPT 4 2023: "At the singularity, the laws of physics as we know them, including space and time, break down, and our current understanding of the universe is insufficient to predict what happens within it."

Not much later, in the 1950s, the term was applied by von Neumann in a technological context (from a 1993 Vernor Vinge essay):

Stan Ulam paraphrased John von Neumann as saying: "One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Brad DeLong used to write about this kind of non-AI historical singularity. My favorite description of what it would be like to approach a technological singularity was Vinge's short story "Fast Times at Fairmont High". (This prescient story appears to be lost to time; he wrote a similar full-length novel but I think the short story was better.)

The core idea is that there's a (virtuous?) recursive loop in which technology improves technology with shorter and shorter cycle times. Many processes go exponential and even near-term developments become unpredictable. One may assume social and economic structures strain to keep pace. The historical singularity's exponential curve was part of The Economist's Y2K Millennium issue GDP-per-person historical graph:


In a January 1983 essay for Omni magazine, Vinge focused on a particular aspect of the technological singularity arising from superhuman intelligence (aka "superintelligence"):

We will soon create intelligences greater than our own ... When this happens there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole" 

A decade later, in his 1993 essay later published in Whole Earth Review (non-Olds cannot imagine what Whole Earth Review was like), Vinge revised what he meant by "soon":

... Based on this trend, I believe that the creation of greater-than-human intelligence will occur during the next thirty years. (Charles Platt has pointed out that AI enthusiasts have been making claims like this for thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.) ...

So by the year 2000 we had the concept of a historical technological singularity (eminently sensible) that had become focused on a specific kind of self-improving technology: superhuman intelligence, the upper-case-S Singularity (presumably AI). Those were useful concepts: the "technological singularity" and the superintelligence "Singularity".

In 1993 Vinge predicted the Singularity would happen before 2030; later experts like Scott Aaronson predicted after 2080. (Aaronson has since revised that prediction and works for OpenAI; Vinge's 2030 date looks pretty good.)

After 2000, though, the word Singularity went off the semantic rails. It came to be used for a peculiar future state in which human minds were uploaded into simulation environments that were usually described as pleasant rather than hellish. This is, of course, antithetical to the original idea of unpredictability! This peculiar meaning of "The Singularity" came to be known as "The Rapture of the Nerds", after the title of a book by Charles Stross and Cory Doctorow. More recently that vision underlies a weird cult called longtermism that seems to have infected some vulnerable minds.

The "Rapture" meaning of "Singularity" has effectively taken over. We no longer have a term for the original von Neumann concept of self-reinforcing technological change that makes even the near term future unpredictable. That perversion of meaning is why I use the tag "Whitewater world" rather than Singularity for my own blog posts (and "Skynet" rather than super intelligence).

So we need a new term. I don't think "whitewater world" is it.

Anyone have a new term? We probably have a few years in which it might still be useful.

Thursday, March 30, 2023

ChatGPT's 155 IQ and passing the Sebastian's children puzzle

A Scientific American contributor found ChatGPT (probably 3) scored 155 on a standard IQ test:

Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155, superior to 99.9 percent of the test takers who make up the American WAIS III standardization sample of 2,450 people. As the chatbot lacks the requisite eyes, ears and hands, it is not able to take WAIS’s nonverbal subtests. But the Verbal IQ and Full Scale IQ scales are highly correlated in the standardization sample, so ChatGPT appears to be very intelligent by any human standards.

We don't need to worry though because it couldn't answer this question:

ChatGPT easily fails at obvious riddles, such as “What is the first name of the father of Sebastian’s children?” (ChatGPT on March 21: I’m sorry, I cannot answer this question as I do not have enough context to identify which Sebastian you are referring to.) It seems that ChatGPT fails to reason logically and tries to rely on its vast database of “Sebastian” facts mentioned in online texts.

This is what happened when I tried the Sebastian test on GPT-4:

Ah, I understand now. This riddle is a classic play on words. The answer is "Sebastian" himself, as he is the father of his children.

Ok, so that's reassuring.

Update 4/12/2023: ChatGPT4 (C4) recently did well in some blinded advanced undergraduate exams.

Tuesday, March 14, 2023

Everything is just fine.

The ad-funded internet is collapsing. It's been ailing for years but, as seems to be the rule, the last bit is happening quite quickly. I'm guessing advertisers, who must be kind of dim, finally realized they were wasting their money. They did much better funding special-interest magazines, but those are gone now. Much of Silicon Valley has been funded by a now-defunct business model. So maybe this is not a good time to be there. (Or, if you are young and have some money, maybe this is the best time to be there.)

Meanwhile human-cognition-level AI has arrived. About 60 years earlier than I'd hoped. And some people thought "Don't Look Up" was about climate change. As a guide I recommend the movie Superintelligence, particularly the protagonist's approach to the end of the world. (Spoiler: It's a rom-com. Her world does not end.)

It's whitewater world all over now. On the bright side, we don't need to waste too much time on longterm planning. I retired in August of 2022 so I think I'll focus on mountain bike trail maintenance and, like Superintelligence's Carol Peters, on the needs of friends and family.

Sunday, March 12, 2023

I ask ChatGPT about the next step.

 Not a bad response.

”Describe what happens when ChatGPT is enhanced with natural language processing and domain-specific knowledge modules.”

--


When ChatGPT is enhanced with natural language processing and domain-specific knowledge modules, it gains several capabilities that allow it to better understand and respond to user input.


Natural language processing (NLP) is a field of artificial intelligence that focuses on enabling computers to understand and interpret human language. By incorporating NLP into ChatGPT, the model can better analyze and interpret user input, including understanding sentence structure, recognizing named entities, and determining the meaning of words in context. This allows ChatGPT to generate more accurate and relevant responses to user queries.


Domain-specific knowledge modules provide ChatGPT with specialized knowledge related to a particular field or topic. For example, if ChatGPT is enhanced with a medical knowledge module, it can better understand medical terminology and provide more accurate and relevant responses to medical-related queries. By incorporating domain-specific knowledge modules, ChatGPT can tailor its responses to specific user needs, making it a more effective conversational agent.


Together, these enhancements enable ChatGPT to better understand and respond to user input, providing more accurate and relevant information to users. This can lead to a more engaging and satisfying user experience, as well as improved productivity and efficiency in a wide range of applications.

Monday, February 20, 2023

Be afraid of ChatGPT

TL;DR: It's not that ChatGPT is miraculous, it's that cognitive science research suggests human cognition is also not miraculous.

"Those early airplanes were nothing compared to our pigeon-powered flight technology!"

https://chat.openai.com/chat - "Write a funny but profound sentence about what pigeons thought of early airplanes"

Relax: ChatGPT is just a fancy autocomplete.
Be afraid: Much of human language generation may be a fancy autocomplete.

Relax: ChatGPT confabulates.
Be afraid: Humans with cognitive disabilities routinely confabulate, and under enough stress most humans will confabulate.

Relax: ChatGPT can’t do arithmetic.
Be afraid: If a monitoring system can detect that a question involves arithmetic or mathematics, it can invoke a math system*. (UPDATE: 2 hours after writing this I read that this has been done.)

Relax: ChatGPT’s knowledge base is faulty.
Be afraid: ChatGPT’s knowledge base is vastly larger than that of most humans and it will quickly improve.

Relax: ChatGPT doesn’t have explicit goals other than a design goal to emulate human interaction.
Be afraid: Other goals can be implemented.

Relax: We don’t know how to emulate the integration layer humans use to coordinate input from disparate neural networks and negotiate conflicts.
Be afraid: *I don't know the status of such an integration layer. It may already have been built. If not it may take years or decades -- but probably not many decades.

Relax: We can’t even get AI to drive a car, so we shouldn’t worry about this.
Be afraid: It’s likely that driving a car basically requires near-human cognitive abilities. The car test isn’t reassuring.

Relax: ChatGPT isn’t conscious.
Be afraid: Are you conscious? Tell me what consciousness is.

Relax: ChatGPT doesn’t have a soul.
Be afraid: Show me your soul.

Relax - I'm bad at predictions. In 1945 I would have said it was impossible, barring celestial intervention, for humanity to go 75 years without nuclear war.
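A postscript on the arithmetic row: the monitoring idea is easy to sketch. This toy is purely illustrative of "detect math, call a calculator instead of the language model"; real tool-use systems are far more elaborate, and this is not how OpenAI implemented it.

```python
# Toy router: arithmetic goes to a real evaluator; everything else to the LLM.
import ast
import operator
import re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate +,-,*,/ arithmetic via the AST -- no eval(), no surprises."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("not simple arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def answer(question: str, llm) -> str:
    """Route: plain arithmetic to the calculator, everything else to the LLM."""
    m = re.fullmatch(r"\s*([\d\.\s\+\-\*/\(\)]+)\s*=?\s*", question)
    if m:
        try:
            return str(safe_eval(m.group(1)))
        except (ValueError, SyntaxError):
            pass
    return llm(question)
```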


See also:

  • All posts tagged as skynet
  • Scott Aaronson and the case against strong AI (2008). At that time Aaronson felt a sentient AI was sometime after 2100. Fifteen years later (Jan 2023) Scott is working for OpenAI (ChatGPT). Emphases mine: "I’m now working at one of the world’s leading AI companies ... that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie ... that AI will gain many of the remaining abilities in years rather than decades, and .. my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world."
  • Imagining the Singularity - in 1965 (2009 post). Mathematician I.J. Good warned of an "intelligence explosion" in 1965. "Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British statistician who worked as a cryptologist at Bletchley Park."
  • The Thoughtful Slime Mold (2008). We don't fly like birds fly.
  • Fermi Paradox resolutions (2000)
  • Against superhuman AI: in 2019 I felt reassured.
  • Mass disability (2012) - what happens as more work is done best by non-humans. This post mentions Clark Goble, an app.net conservative I miss quite often. He died young.
  • Phishing with the post-Turing avatar (2010). I was thinking 2050 but now 2025 is more likely.
  • Rat brain flies plane (2004). I've often wondered what happened to that work.
  • Cat brain simulator (2009). "I used to say that the day we had a computer roughly as smart as a hamster would be a good day to take the family on the holiday you've always dreamed of."
  • Slouching towards Skynet (2007). Theories on the evolution of cognition often involve aspects of deception including detection and deceit.
  • IEEE Singularity Issue (2008). Widespread mockery of the Singularity idea followed.
  • Bill Joy - Why the Future Doesn't Need Us (2000). See also Wikipedia summary. I'd love to see him revisit this essay but, again, he was widely mocked.
  • Google AI in 2030? (2007) A 2007 prediction by Peter Norvig that we'd have strong AI around 2030. That ... is looking possible.
  • Google's IQ boost (2009) Not directly related to this topic but reassurance that I'm bad at prediction. Google went to shit after 2009.
  • Skynet cometh (2009). Humor.
  • Personal note - in 1979 or so John Hopfield excitedly described his work in neural networks to me. My memory is poor but I think we were outdoors at the Caltech campus. I have no recollection of why we were speaking, maybe I'd attended a talk of his. A few weeks later I incorporated his explanations into a Caltech class I taught to local high school students on Saturday mornings. Hopfield would be about 90 if he's still alive. If he's avoided dementia it would be interesting to ask him what he thinks.