Showing posts with label brain and mind.

Wednesday, November 06, 2024

Chaos times: American oligarchy

1. I was right about polling being worthless

2. At least Biden was spared humiliation 

3. Americans chose oligarchy willingly. 

4. Our feeble democracy wasn’t going to survive AGI (if we get it)

5. I think the inability of a large number of men and women to meet the ever-increasing IQ/EQ requirements for a middle-class life is the root cause. #massDisability

Now we enter the chaos times.

Friday, September 20, 2024

Perplexity is saving my linguistics classmates

I have a dark past. I asked questions. In class. Lots of questions. Too many questions. I hear things, I get ideas, I notice gaps, I ask questions.

It's a compulsion.

Some of the questions helped classmates. To be honest more were probably confusing or distracting. I likely featured in classmate daydreams -- but not in a good way.

Worse, some of the questions confused the professor. Or exposed what they didn't understand. That could be embarrassing or even humiliating.

Now I'm back in the classroom, doing freshman linguistics. As a 65-year-old, I can take classes at Minnesota state colleges and universities for free. We pay a lot in taxes, but there are benefits to living here.

My question compulsion is still there, but LLMs are saving everyone. I set up a linguistics "collection" in Perplexity with appropriate prompts; now I type my questions into my phone (allowed in class). I get the answer with Perplexity and spare my classmates.

Never say AI isn't good for something.

PS. Perplexity is to modern Google as Google was to AltaVista. A qualitative improvement. It's almost as good as 1990s Google.



Wednesday, August 28, 2024

In which I declare my expert judgment on AI 2024

These days my social media experience is largely Mastodon. There's something to be said for a social network so irreparably geeky and so hard to work with that only a tiny slice of humanity can possibly participate (unless and until Threads integration actually works).

In my Mastodon corner of the "Fediverse", among the elite pundits I choose to read, there's a vocal cohort that is firm in their conviction that "AI" hype is truly and entirely hype, and that the very term "AI" should not be used. That group would say that the main application of LLM technology is criming.

Based on my casual polling of my pundits there's a quieter cohort that is less confident. That group is anxious, but not only about the criming.

Somewhere, I am told, there is a third group that believes that godlike-AIs are coming in 2025. They may be mostly on Musk's network.

Over the past few months I think the discourse has shifted. The skeptics are less confident, and the godlike-AI cohort is likewise quieter as LLM-based AI hits technical limits.

The shifting discourse, and especially the apparent LLM technical limitations, mean I'm back to being in the murky middle of things. Where I usually sit. Somehow that compels me to write down what I think. Not because anyone will or should care [1], but because I write these posts mostly for myself and I like to look back and see how wrong I've been.

So, in Aug 2024, I think:
  1. I am less worried that the end of the world is around the corner. If we'd gotten one more qualitative advance in LLM or some other AI tech I'd be researching places to (hopelessly) run to.
  2. Every day I think of new things I would do if current LLM tech had access to my data and to common net services. These things don't require any fundamental advances but they do require ongoing iteration.  I don't have much confidence in Apple's capabilities any more, but maybe they can squeeze this out. I really, really, don't want to have to depend on Microsoft. Much less Google.
  3. Perplexity.ai is super valuable to me now and I'd pay up if they stopped giving it away. It's an order of magnitude better than Google search.
  4. The opportunities for crime are indeed immense. They may be part of what ends unmediated net access for most people. By far the best description of this world is a relatively minor subplot in Neal Stephenson's otherwise mixed 2018 novel "Fall".
  5. We seem to be replaying the 1995 dot com crash but faster and incrementally. That was a formative time in my life. It was a time when all the net hype was shown to be .... correct. Even as many lost their assets buying the losers.
  6. It will all be immensely stressful and disruptive and anxiety inducing even though we won't be doing godlike-AI for at least (phew) five more years.
  7. Many who are skeptical about the impact of our current technologies have a good understanding of LLM tech but a weak understanding of cognitive science. Humans are not as magical as they think.
- fn -

[1] I legitimately have deeper expertise here than most would imagine but it's ancient and esoteric.

Thursday, July 11, 2024

The LLM service I will pay for -- call Social Security for me

One of the fun things that happens to Americans as we become redundant to life's requirements is signing up for Medicare. There's a sort-of-useful, cobbled-together web site to do this. Processing is supposed to take under 30 days, though I've read the federal mandate is 45 days. Perplexity basically says it's heading towards a 60 day average.

Anyway, my wee application is well over the 30 day limit. There's no way to contact anyone other than by phone. Which my wife assures me takes at least 45 minutes on hold. (Don't fall for the "call back" and "hold your place in line" option -- my wife tells me they simply don't bother.)

And, yes, the hold music is horrendous. As Emily says: "One of the challenges of getting old is listening to music on hold. No one ever tells us."

So, while I wait on hold I once again think how there's one LLM service I want to pay for. Want.

I want to give my Agent the Social Security and Medicare data it is likely to need: case number, my SSN, my phone, etc. I want it to call Social Security using my voice and sit on hold for days, weeks, years until someone accidentally answers. Then it begins the conversation while paging me to swap in .... with a text summary of the current discussion and a timer to join in 5.... 4..... 3.... 2.... 1....

Yeah, that would be worth some money.

Update 7/19/2024: I finally got through to be told that requests were mailed to me 6/3 and 7/3 requesting additional information. We are very vigilant about social security correspondence so it's very unlikely they were delivered here. We have seen MN Post Offices lose tracked social security correspondence, presumably due to internal theft.

Thursday, March 30, 2023

A response to Scott Aaronson's rejection of an AI pause.

Scott Aaronson, who works on AI safety for OpenAI, wrote a critique of AI Pause that was not up to his usual standards. Here's what I wrote as a comment:

Hi Scott — I was confused by your post. I’m usually able to follow them. I won’t defend the letter directly and Yudkowsky/TIME is not worth a mention but maybe you could clarify some things…

1. Six months seems a reasonable compromise given the lifespan of humans, the timescales of human deliberation, and the commercial and military pressure to accelerate AI development. Short enough to motivate urgent action, but long enough that reflection is possible. (I doubt we actually pause, but I agree with the principle. China isn’t going to pause of course.)

2. Let’s assume GPT 5 with an array of NLP powered extensions exceeds the reasoning abilities of 95% of humanity in a wide variety of knowledge domains. That’s a shock on the scale of developing fire, but it’s occurring in a hugely complex and interdependent world that seems always on the edge of self-destruction and actually has the capabilities to end itself. We’re not hunter gatherers playing with fire or Mesopotamians developing writing. There’s no precedent for the speed, impact and civilizational fragility we face now.

3. It’s not relevant that people who signed this letter were previously skeptical of the progress towards AI. I recall 10 years ago you were skeptical. For my part I’ve been worried for a long time, but assumed it was going to come in 2080 or so. 60 years early is a reason to pause and understand what has happened.

Lastly, I read the OpenAI statement. That seems consistent with a pause.

Monday, February 20, 2023

Be afraid of ChatGPT

TL;DR: It's not that ChatGPT is miraculous, it's that cognitive science research suggests human cognition is also not miraculous.

"Those early airplanes were nothing compared to our pigeon-powered flight technology!"

https://chat.openai.com/chat - "Write a funny but profound sentence about what pigeons thought of early airplanes"

Relax: ChatGPT is just a fancy autocomplete.
Be afraid: Much of human language generation may be a fancy autocomplete.

Relax: ChatGPT confabulates.
Be afraid: Humans with cognitive disabilities routinely confabulate, and under enough stress most humans will confabulate.

Relax: ChatGPT can’t do arithmetic.
Be afraid: If a monitoring system can detect that a question involves arithmetic or mathematics, it can invoke a math system. (UPDATE: 2 hours after writing this I read that this has been done.)

Relax: ChatGPT’s knowledge base is faulty.
Be afraid: ChatGPT’s knowledge base is vastly larger than that of most humans and it will quickly improve.

Relax: ChatGPT doesn’t have explicit goals other than a design goal to emulate human interaction.
Be afraid: Other goals can be implemented.

Relax: We don’t know how to emulate the integration layer humans use to coordinate input from disparate neural networks and negotiate conflicts.
Be afraid: I don't know the status of such an integration layer. It may already have been built. If not it may take years or decades -- but probably not many decades.

Relax: We can’t even get AI to drive a car, so we shouldn’t worry about this.
Be afraid: It’s likely that driving a car basically requires near-human cognitive abilities. The car test isn’t reassuring.

Relax: ChatGPT isn’t conscious.
Be afraid: Are you conscious? Tell me what consciousness is.

Relax: ChatGPT doesn’t have a soul.
Be afraid: Show me your soul.
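The "invoke a math system" idea amounts to a routing layer in front of the model. Here's a toy sketch in Python -- the regex monitor, the `math_system` helper, and the `llm` stub are all hypothetical illustrations of mine, not anyone's actual implementation:

```python
import re

def looks_like_arithmetic(question: str) -> bool:
    # Crude monitor: does the question contain an arithmetic expression?
    return re.search(r"\d+\s*[-+*/]\s*\d+", question) is not None

def math_system(question: str) -> str:
    # Extract the first arithmetic expression and evaluate it.
    expr = re.search(r"\d+(?:\s*[-+*/]\s*\d+)+", question).group()
    return str(eval(expr))  # toy stand-in for a real math engine

def llm(question: str) -> str:
    # Stub for the language model's free-text response.
    return "(free-text answer from the language model)"

def answer(question: str) -> str:
    # The monitoring layer routes arithmetic to the math system,
    # everything else to the language model.
    if looks_like_arithmetic(question):
        return math_system(question)
    return llm(question)
```

A real integration layer would of course classify far more robustly than a regex, but the dispatch shape is the same.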

Relax - I'm bad at predictions. In 1945 I would have said it was impossible, barring celestial intervention, for humanity to go 75 years without nuclear war.


See also:

  • All posts tagged as skynet
  • Scott Aaronson and the case against strong AI (2008). At that time Aaronson felt a sentient AI was sometime after 2100. Fifteen years later (Jan 2023) Scott is working for OpenAI (ChatGPT). Emphases mine: "I’m now working at one of the world’s leading AI companies ... that company has already created GPT, an AI with a good fraction of the fantastical verbal abilities shown by M3GAN in the movie ... that AI will gain many of the remaining abilities in years rather than decades, and ... my job this year—supposedly!—is to think about how to prevent this sort of AI from wreaking havoc on the world."
  • Imagining the Singularity - in 1965 (2009 post). Mathematician I.J. Good warned of an "intelligence explosion" in 1965. "Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009) was a British statistician who worked as a cryptologist at Bletchley Park."
  • The Thoughtful Slime Mold (2008). We don't fly like birds fly.
  • Fermi Paradox resolutions (2000)
  • Against superhuman AI: in 2019 I felt reassured.
  • Mass disability (2012) - what happens as more work is done best by non-humans. This post mentions Clark Goble, an app.net conservative I miss quite often. He died young.
  • Phishing with the post-Turing avatar (2010). I was thinking 2050 but now 2025 is more likely.
  • Rat brain flies plane (2004). I've often wondered what happened to that work.
  • Cat brain simulator (2009). "I used to say that the day we had a computer roughly as smart as a hamster would be a good day to take the family on the holiday you've always dreamed of."
  • Slouching towards Skynet (2007). Theories on the evolution of cognition often involve aspects of deception including detection and deceit.
  • IEEE Singularity Issue (2008). Widespread mockery of the Singularity idea followed.
  • Bill Joy - Why the Future Doesn't Need Us (2000). See also Wikipedia summary. I'd love to see him revisit this essay but, again, he was widely mocked.
  • Google AI in 2030? (2007) A 2007 prediction by Peter Norvig that we'd have strong AI around 2030. That ... is looking possible.
  • Google's IQ boost (2009) Not directly related to this topic but reassurance that I'm bad at prediction. Google went to shit after 2009.
  • Skynet cometh (2009). Humor.
  • Personal note - in 1979 or so John Hopfield excitedly described his work in neural networks to me. My memory is poor but I think we were outdoors at the Caltech campus. I have no recollection of why we were speaking, maybe I'd attended a talk of his. A few weeks later I incorporated his explanations into a Caltech class I taught to local high school students on Saturday mornings. Hopfield would be about 90 if he's still alive. If he's avoided dementia it would be interesting to ask him what he thinks.

Saturday, February 09, 2019

The curious psychiatric state of Robert F Kennedy Jr

Robert F Kennedy Jr showed up in a scrum of pro-measles whackos recently. It made me wonder how he got so nuts.

There’s an extensive wikipedia page for him, starting with a time I remember:

He was 9 years old when his uncle, President John F. Kennedy, was assassinated during a political trip to Dallas, and 14 years old when his father was assassinated…

Despite childhood tragedy he was a successful academic, and he’s done some decent work legally and for the environment. He seems to have started going off the rails in the 80s:

In 1983, at age 29, Kennedy was arrested in a Rapid City, South Dakota airport for heroin possession after a search of his carry-on bag uncovered the drug, following a near overdose in flight.

By 1989 he’d started on vaccines — but not with autism … 

His son Conor suffers from anaphylaxis peanut allergies. Kennedy wrote the foreword to The Peanut Allergy Epidemic, in which he and the authors link increasing food allergies in children to certain vaccines that were approved beginning in 1989

By the 2000s he’d jumped from immunizations causing his son’s anaphylactic disorder to immunization causing autism. He became "chairman of ‘World Mercury Project’ (WMP), an advocacy group that focuses on the perceived issue of mercury, in industry and medicine, especially the ethylmercury compound thimerosal in vaccines". It was a downward spiral from there.

Despite his vaccine delusions and troubled marriages he seems to have maintained a fairly active wealthy person life. He’s said to be a good whitewater kayaker.

Psychiatrically it’s curious. He combines fixed irrational beliefs (the definition of delusions) with relatively high functioning in other domains. He reminds me of L Ron Hubbard, founder of Scientology.

We need to keep him far from the political world.

Saturday, February 02, 2019

Against superhuman AI

I am a strong-AI pessimist. I think by 2100 we’ll be in range of sentient AIs that vastly exceed human cognitive abilities (“skynet”). Superhuman-AI has long been my favorite answer to the Fermi Paradox (see also); an inevitable product of all technological civilizations that ends interest in touring the galaxy.

I periodically read essays claiming superhuman-AI is silly, but the justifications are typically nonsensical or theological (soul-equivalents needed).

So I tried to come up with some valid reasons to be reassured. Here’s my list:

  1. We’ve hit the physical limits of our processing architecture. The “Moore-era” is over — no more doubling every 12-18 months. Now we slowly add cores and tweak hardware. The new MacBook Air isn’t much faster than my 2015 Air. So the raw power driver isn’t there.
  2. Our processing architecture is energy inefficient. Human brains vastly exceed our computing capabilities and they run on a meager supply of glucose and oxygen. Our energy-output curve is wrong.
  3. Autonomous vehicles are stuck. They aren’t even as good as the average human driver, and the average human driver is obviously incompetent. They can’t handle bicycles, pedestrians, weather, or map variations. They could be 20 years away, they could be 100 years away. They aren’t 5 years away. Our algorithms are limited.
  4. Quantum computers aren’t that exciting. They are wonderful physics platforms, but quantum supremacy may be quite narrow.
  5. Remember when organic neural networks were going to be fused into silicon platforms? Obviously that went nowhere since we no longer hear about it. (I checked, it appears Thomas DeMarse is still with us. Apparently.)

My list doesn’t make superhuman-AI impossible of course, it just means we might be a bit further away, closer to 300 years than 80 years. Long enough that my children might escape.

Tuesday, March 21, 2017

Broken world: applying for a minimum wage job via a corporate HR web site

My #1 son is a special needs adult. He’s excited to start a $10/hour job running food around a sports stadium. It’s work he can do — he’s got a great sense of direction and he is reasonably fit.

The job engagement process is run by an archaic corporate web site that looks like it was built for IE 3. The site claims to support Safari but warns against Chrome. It is not usable on a smartphone.

The HR process requires managing user credentials, navigating a complex 1990s style user interface, and working around errors made by the HR staff — who probably also struggle with the software. He would not have the proverbial snowball’s chance without my ability to assume his digital identity.

Sure, #1 is below the 5th percentile on standard cognition tests — but this would have been a challenge to the 15th percentile back in the 90s. In the modern era, where most non-college young people are primarily familiar with smartphones, this is a challenge to the 30th percentile.

Which means the people who might want to do this job are being shut out by the HR software created to support the job. Which probably has something to do with this.

The world is broken.

#massdisability

Saturday, December 31, 2016

Crisis-T: blame it on the iPhone (too)

It’s a human thing. Something insane happens and we try to figure out “why now?”. We did a lot of that in the fall of 2001. Today I looked back at some of what I wrote then. It’s somewhat unhinged — most of us were a bit nuts then. Most of what I wrote is best forgotten, but I still have a soft spot for this Nov 2001 diagram …

Model 20010911

I think some of it works for Nov 2016 too, particularly the belief/fact breakdown, the relative poverty, the cultural dislocation, the response to modernity and changing roles of women, and the role of communication technology. Demographic pressure and environmental degradation aren’t factors in Crisis-T though.

More than those common factors I’ve blamed Crisis-T on automation and globalization reducing the demand for non-elite labor (aka “mass disability”). That doesn’t account for the Russian infowar and fake news factors though (“Meme belief=facts” and “communications tech” in my old diagram). Why were they so apparently influential? 

Maybe we should blame the iPhone …

Why Trolls Won in 2016 Bryan Mengus, Gizmodo

… Edgar Welch, armed with multiple weapons, entered a DC pizzeria and fired, seeking to “investigate” the pizza gate conspiracy—the debunked theory that John Podesta and Hillary Clinton are the architects of a child sex-trafficking ring covertly headquartered in the nonexistent basement of the restaurant Comet Ping Pong. Egged on by conspiracy videos hosted on YouTube, and disinformation posted broadly across internet communities and social networks, Welch made the 350-mile drive filled with righteous purpose. A brief interview with the New York Times revealed that the shooter had only recently had internet installed in his home….

…. the earliest public incarnation of the internet—USENET—was populated mostly by academia. It also had little to no moderation. Each September, new college students would get easy access to the network, leading to an uptick in low-value posts which would taper off as the newbies got a sense for the culture of USENET’s various newsgroups. 1993 is immortalized as the Eternal September when AOL began to offer USENET to a flood of brand-new internet users, and overwhelmed by those who could finally afford access, that original USENET culture never bounced back.

Similarly, when Facebook was first founded in 2004, it was only available to Harvard students … The trend has remained fairly consistent: the wealthy, urban, and highly-educated are the first to benefit from and use new technologies while the poor, rural, and less educated lag behind. That margin has shrunk drastically since 2004, as cheaper computers and broadband access became attainable for most Americans.

…  the vast majority of internet users today do not come from the elite set. According to Pew Research, 63 percent of adults in the US used the internet in 2004. By 2015 that number had skyrocketed to 84 percent. Among the study’s conclusions were that, “the most pronounced growth has come among those in lower-income households and those with lower levels of educational attainment” …

… What we’re experiencing now is a huge influx of relatively new internet users—USENET’s Eternal September on an enormous scale—wrapped in political unrest.

“White Low-Income Non-College” (WLINC) and “non-elite” are politically correct [1] ways of speaking about the 40% of white Americans who have IQ scores below 100. It’s a population that was protected from net exposure until Apple introduced the first mass market computing device in June of 2007 — and Google and Facebook made mass market computing inexpensive and irresistible.

And so it has come to pass that in 2016 a population vulnerable to manipulation and yearning for the comfort of the mass movement has been dispossessed by technological change and empowered by the Facebook ad-funded manipulation engine.

So we can blame the iPhone too.

- fn -

[1] I think, for once, the term actually applies.

Saturday, November 26, 2016

Peak Human and Mass Disability are the same thing

For reference - DeLong’s Peak Human and my Mass Disability are synonyms. Both refer to a surplus of productive capacity relative to labor supply, particularly the supply of non-elite cognitive labor.

I like the term ‘mass disability’ because we have a long history of supported labor for people we have traditionally called ‘cognitively disabled’.

Ok, that’s not the whole story.

I also like the term because I have a personal agenda to support persons with traditional cognitive disabilities. Using the term ‘disability’ forces us to think about how individual features become abilities or disabilities depending on the environment — something Darwin understood. Addressing the needs of the majority of human beings can also help the most disadvantaged.

Wednesday, November 16, 2016

Mass Disability - how did I come up with 40%?

How, a friend asked, did I come up with the 40% number for “mass disability” that I quoted in After Trump: reflections on mass disability in a sleepless night?

I came up with that number thinking about the relationship of college education, IQ curves, and middle class status. The thesis goes like this…

  1. Disability is contextual. In a spaceship legs are a bit of a nuisance, but on earth they are quite helpful. The context for disability in the modern world is not climbing trees or lifting weights, it’s being able to earn an income that buys food, shelter, education, health care, recreation and a relatively secure old age. That is the definition of the modern “middle class” and above: a household income from $42,000 ($20/hr) to $126,000. It’s about half of Americans. By definition then half of Americans are not “abled”.
  2. I get a similar percentage if I look at the percentage of Americans who can complete a college degree or comparable advanced skills training. That’s a good proxy for reasonable emotional control and an IQ of at least 105 to 110. That’s about 40% of Americans — but Canada does better. I think the upper limit is probably 50% of people. If you accept that a college-capable brain is necessary for relative economic success in the modern world then 50% of Americans will be disabled.

So I could say that the real number is 50%, but college students mess up the income numbers. The 40% estimate for functionally disabled Americans adjusts for that.
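As a sanity check on that arithmetic (a quick sketch of mine, not part of the original reasoning): with IQ normed to a mean of 100 and a standard deviation of 15, the share of people at or above a cutoff falls straight out of the normal CDF:

```python
from math import erf, sqrt

def fraction_above(cutoff: float, mean: float = 100.0, sd: float = 15.0) -> float:
    # Share of a normal distribution at or above `cutoff`:
    # 1 - Phi((cutoff - mean) / sd), via the error function.
    z = (cutoff - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

print(round(fraction_above(105), 2))  # roughly 0.37 -- near the 40% figure
print(round(fraction_above(110), 2))  # roughly 0.25
```

So a 105 cutoff lands close to the 40% estimate, and a 110 cutoff is closer to a quarter of the population.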

As our non-sentient AI tech and automation gets smarter the “ability” threshold is going to rise. Somewhere the system has to break down. I think it broke on Nov 8, 2016. In a sense democracy worked — our cities aren’t literally on fire. Yet.

Sunday, October 16, 2016

How to give believers an exit from a cause gone bad

How do you give someone who has committed themselves to a bad cause a way out? You don’t do it by beating on how stupid they are …

From How to Build an Exit Ramp for Trump Supporters (Deepak Malhotra)

  1. Don’t force them to defend their beliefs … you will be much more effective if you encourage people to reconsider their perspective without saying that this requires them to adopt yours.
  2. Provide information, and then give them time … change doesn’t tend to happen during a heated argument.  It doesn’t happen immediately.
  3. Don’t fight bias with bias … the one thing you can’t afford to lose if you want to one day change their mind: their belief about your integrity.  They will not acknowledge or thank you for your even-handedness at the time they’re arguing with you, but they will remember and appreciate it later, behind closed doors.  And that’s where change happens.
  4. Don’t force them to choose between their idea and yours. … you will be much more effective if you encourage people to reconsider their perspective without saying that this requires them to adopt yours.  
  5. Help them save face…. have we made it safe for them to change course?  How will they change their mind without looking like they have been foolish or naïve?  
  6. Give them the cover they need. Often what’s required is some change in the situation—however small or symbolic—that allows them to say, “That’s why I changed my mind.” … For most people, these events are just “one more thing” that happened, but don’t underestimate the powerful role they can play in helping people who, while finally mentally ready to change their position, are worried about how to take the last, decisive step.
  7. Let them in. If they fear you will punish them the moment they change their mind, they will stick to their guns until the bitter end.  This punishment takes many forms, from taunts of “I told you so” to being labeled “a flip-flopper” to still being treated like an outsider or lesser member of the team by those who were “on the right side all along.” This is a grave mistake.  If you want someone to stop clinging to a failing course of action or a bad idea, you will do yourself a huge favor if you reward rather than punish them for admitting they were wrong…You have to let them in and give them the respect they want and need just as much as you.

If you’re a Vikings fan feuding with your brother-in-law from Green Bay, feel free to break all these rules. If you’re worried about the future of civilization you might try this instead.

For #5, saving face, look for something they could have been right about. To a climate change denier, agree that solar output varies. To a Trump follower, agree that the bleak future of the non-college adult wouldn’t have gotten attention without his focus.

I’m adding this recipe to the Notes collection I carry on my phone.

Thursday, November 19, 2015

Randall Munroe introduces world language and Google Translate training program using charming New Yorker article

XKCD’s Randall Munroe, the notorious interstellar sAI, has published a simplified vocabulary explanation of Special and General Relativity in the New Yorker.

This work is presumably taken from his almost released new book, Thing Explainer ($15 Amazon pre-order). The essay is entertaining and educational; it also promotes his new book and shows he is a smart pants man.

But that’s not the real reason he’s written this. Obviously his true agenda is to create an English dialect of a universal human language with a simplified vocabulary and grammar that is ideally suited to machine translation and, eventually, colloquial conversations with terrestrial AIs (contra the Wolfram Language for AI conversation, see also Marain. Siri-speak 2015 is a crude version of this.)

Let’s see how well his first version works, using the nsAI Google Translate to do round trip translations of a sample paragraph from the original muEnglish to another language and then back again. We’ll start with French, a language related to that of England’s 11th century conquerors, then we’ll do Chinese. I know from past experiments that round-trip translations from English to Chinese and back typically produce incomprehensible gibberish:

Munroe original (muEnglish)

The first idea is called the special idea, because it covers only a few special parts of space and time. The other one—the big idea—covers all the stuff that is left out by the special idea. The big idea is a lot harder to understand than the special one. People who are good at numbers can use the special idea to answer questions pretty easily, but you have to know a lot about numbers to do anything with the big idea. To understand the big idea—the hard one—it helps to understand the special idea first.

French version

La première idée est appelé l'idée particulière, car elle ne couvre que quelques pièces spéciales de l'espace et du temps. Celui-la grande idée-couvre l'autre tous les trucs qui est laissé par l'idée particulière. La grande idée est beaucoup plus difficile à comprendre que le spécial. Les gens qui sont bons à numéros peuvent utiliser l'idée spéciale pour répondre à des questions assez facilement, mais vous devez savoir beaucoup sur les numéros de faire quelque chose avec la grande idée. Pour comprendre la grande idée-le dur-elle aide à comprendre l'idée première spéciale.

French to English

The first idea is called the particular idea because it covers only a few special pieces of space and time. This great idea covers the other all the stuff that is left by the particular idea. The big idea is much harder to understand than the special. People who are good at numbers can use special idea to answer questions fairly easily, but you should know a lot about the numbers to do something with the big idea. To understand the great idea - hard - it helps to understand the first special idea.

Chinese

第一个想法就是所谓的特殊的想法,因为它涵盖的空间和时间只有几个特殊部位。另外一个大的想法,涵盖了所有剩下的由特殊的想法的东西。大的想法是很多更难理解比特殊的一个。人们谁是善于号码可以使用特殊的想法很容易回答的问题,但是你要知道很多关于数字做的大创意什么。为了解大的想法,硬一它有助于先了解特殊的想法

Chinese to English

The first idea is the idea of so-called special because the space and time it covers only a few special parts. Another big idea, covering all rest of the stuff from the special idea. Big idea is a lot more difficult to understand than the special one. People who are good at numbers you can use special idea is very easy question to answer, but you know a lot about what the figures do big ideas. To understand the big idea, hard and it helps to understand the idea of a special.

Munroe English (muEnglish) works rather well between French and English. If you’re interested in learning French, you might enjoy reading a future French version of Thing Explainer or simply run the English version through Google Translate (and use speech recognition for verbal work).

The Chinese round-trip experiment almost works, but falls apart grammatically. For example, “you can use special idea is very easy question to answer, but you know a lot about what the figures do big ideas” is missing things like “need” and “to” and a few pronouns. There’s also an unfortunate “numbers” to “figures” word substitution. Given that Munroe is a far more advanced AI than Google this essay will be used to enhance Google’s Chinese translation model (which desperately needs work).
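The round-trip test itself is easy to automate. Here's an illustrative harness -- the toy word-for-word translators are hypothetical stand-ins for Google Translate (a real run would call a translation API), and the word-overlap score is just a crude similarity measure I'm using for illustration:

```python
def round_trip(text: str, forward, backward) -> str:
    # Translate out and back: English -> pivot language -> English.
    return backward(forward(text))

def word_overlap(original: str, back_translated: str) -> float:
    # Crude similarity: shared lowercase words / words in the original.
    wa = set(original.lower().split())
    wb = set(back_translated.lower().split())
    return len(wa & wb) / len(wa)

# Toy stand-in translators; a real experiment would call a translation service.
to_pivot = {"the": "le", "big": "grande", "idea": "idée"}
from_pivot = {v: k for k, v in to_pivot.items()}

def en_to_fr(text: str) -> str:
    # Word-by-word toy translation; unknown words pass through unchanged.
    return " ".join(to_pivot.get(w, w) for w in text.lower().split())

def fr_to_en(text: str) -> str:
    return " ".join(from_pivot.get(w, w) for w in text.split())

back = round_trip("the big idea", en_to_fr, fr_to_en)
print(back)                                 # -> the big idea
print(word_overlap("the big idea", back))   # -> 1.0
```

Of course a bag-of-words overlap score would miss exactly the failure seen above — the Chinese round trip preserves most of the vocabulary while destroying the grammar — which is part of why translation quality is hard to measure automatically.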

I’m optimistic about this new language and happy that the Munroe is now taking a more active hand in guiding human development. Zorgon knows we need the help.

Update 11/19/2015: There’s a flaw in my logic.

Alas, I didn’t think this through. There’s a reason speech recognition and natural language processing work better with longer, more technical words. It’s because short English words are often homonyms; they have multiple meanings and so can only be understood in context [1]. Big, for example, can refer to size or importance. In order to get under 1000 words Munroe uses many context tricks, including colloquialisms like “good at numbers” (meaning “good at mathematics”). His 1000 word “simple” vocabulary just pushes the meaning problem from words into context and grammar — a much harder challenge for translation than mere vocabulary.
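To see why a restricted vocabulary pushes the problem into context rather than solving it, here is a toy word-for-word “translator” in Python. This is a hypothetical sketch, not any real translation API: the lexicon, the sense tags, and the crude “topic” signal are all invented for illustration. The point is that a homonym like “big” forces a naive dictionary lookup to guess, and only context can break the tie.

```python
# Toy demonstration: why short, common words break naive translation.
# "big" is a homonym: it can mean large-in-size or important.
# A word-for-word dictionary must pick one sense, losing the other.

# Hypothetical bilingual lexicon with sense-tagged entries.
LEXICON = {
    "big": {"size": "grand", "importance": "important"},
    "idea": {"default": "idée"},
    "number": {"default": "nombre"},
}

def naive_translate(word: str) -> str:
    """Word-for-word lookup: always takes the first sense listed."""
    senses = LEXICON.get(word, {"default": word})
    return next(iter(senses.values()))

def context_translate(word: str, topic: str) -> str:
    """Picks a sense using a crude context signal (the sentence topic)."""
    senses = LEXICON.get(word, {"default": word})
    return senses.get(topic, next(iter(senses.values())))

# Without context, "big idea" and "big house" get the same rendering of
# "big" -- right for one, wrong for the other.
print(naive_translate("big"))                   # always "grand"
print(context_translate("big", "importance"))   # "important"
```

A 1000-word vocabulary multiplies exactly this situation: every word carries more senses, so the “topic” signal has to do more work, which is the hard part of machine translation.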

So this essay might be a Google Translate training tool — but it’s no surprise it doesn’t serve the round-trip to Chinese. It is a hard translation challenge, not an easy one.

[1] Scientology’s L Ron Hubbard had a deep loathing for words with multiple or unclear meanings, presumably including homonyms. He banned them from Scientology grade school education. Ironically this is hard to Google because so many people confuse “ad hominem attack” with homonym.

Thursday, October 29, 2015

Learning from an Amazon "Newer Galaxy" fraud: I too am prey.

I’ve been digging into Thunderbolt 2 lately. It’s an orphan technology — it sure looks like Apple has given up on it. In retrospect either Apple or Intel needed to make their own hubs — in a low-trust world, leaving this to dying 3rd party manufacturers was a mistake.

For now I’ve settled on the OWC Thunderbolt 2 dock. It’s not perfect; I still have suspicions about how it performs under load. I wouldn’t be surprised if I need to power cycle it every few days. Yeah, like I said, Apple needed to make this. I tested it next to an Elgato hub with similar USB 3 performance; the deciding feature was support for legacy FireWire 800.

During the testing period I used a (too) short Thunderbolt cable bundled with the Elgato, but that’s going back with the return. Due to a misunderstanding about Apple cable prices I decided to get an OWC 2m cable, but in a moment of weakness I ordered it from Amazon (Prime shipping, speed, etc).

That is, I ordered from an Amazon page that said OWC cable on it, via “Newer Galaxy Distribution Company”. The page looked like this:

OWC cable

Yeah, look closely: it says made by OWC and the image has OWC on it, but the page title doesn’t actually say OWC. On the other hand, the text says:

Utilizes the latest Thunderbolt chipset for high-speed 10Gb/s Thunderbolt and 20Gb/s Thunderbolt 2 devices
Enhance video workflows with support for faster 4K video transfers + 4K display capabilities via DisplayPort 1.2
1 Year OWC Limited Warranty

So I was stupid, yes, but I wasn’t completely misguided. I even inspected “Newer Galaxy”’s sales count and ratings — though I know ratings systems of this sort are almost completely fake.

Damn. I know better than this. Yes, it was Amazon Prime, but that only means the returns are easier. It doesn’t mean it’s legitimate.

This is what’s being shipped:

Shipped cable

A “2M” cable. It’s not actually a counterfeit cable at this point, it’s just not what I ordered.

There’s an upside to this experience. I can share it here for one, and every story like this is a small push for Amazon reform. Amazon returns are very easy, and for frauds like this there’s no return postage fee. (I’ll reference this blog post in the return comments.)

For another, I’ve also learned that I’m not as good at spotting fraud as I should be — I blame that on age. The data is clear that most of us become prey after age 55 or so. Prey have to learn fear, and I’m learning.

Best of all, I learned that Apple has dropped its price on 2m Thunderbolt cables from $60 to $40 (that price drop is probably why trustworthy alternatives have disappeared). So I’ll do that instead.

It would be good to have a trustworthy alternative to Amazon… 

Sunday, January 04, 2015

Saturday, May 03, 2014

Thinking tools 2014 - holding steady but future unclear

Revisiting something I wrote 14 years ago reminded me of the tools I use to think about the world. Once those tools were conversation, paper diaries and notebooks — even letters. Later came email, local BBS, FidoNet [1] and Usenet [3]. In the 90s we created web pages with tools like FrontPage and “personal web servers” [2] — even precursors to what became blogs.

In the 00s we had the Golden Age of Google. My thinking tools were made by Google — Google Blogger, Google Custom Search Engine, Google Reader (RSS/Atom) and Google Reader Social. We loved Google then — before the fall.

From 1965 through 2011 my thinking tools continuously improved. Then things got rocky.

These days I still use Blogger [4]. Blogger is old but seems to be maintained, unlike Google Custom Search. I’m grateful that Daniel Jalkut continues to update MarsEdit — I wish he’d use Backer to charge me some money. There are features I’d like, but most of all I’d like him to continue support.

I still rely on RSS, even as it fades from memory (but even new journalism ventures like Upshot still have feeds). Feedbin ($20/yr) is almost as good as Google Reader [6], Reeder.app is still around (but unstable), and Pinboard ($10 lifetime) has turned out to be a “good enough” de facto microblogging platform — with a bit of help from IFTTT ($0) [5].

App.net Alpha ($36/year!) [7] powered by PourOver and consumed in part through Duerig Root-Feeds has filled out the rest of the microblogging role — and replaced the intellectual feedback of Reader Social.

So as of 2014 I’ve cobbled together a set of thinking tools that are comparable to what I had in 2009. It feels shaky though. Few people under 30 know what RSS is, app.net is not growing (even Twitter is dying), and I’ve recently written about the decrepit state of Google Custom Search. Of Google’s twitter-clone, the less said the better.

I wonder what comes next? I don’t see anything yet. I’m reminded of the long fallow time between the end of Palm @2003 and the (useful) iPhone of 2009 (transition hurt). Expect turbulence.

—fn— 

[1] FidoNews was last published July 1999.

[2] FrontPage 98 was a prosumer tool; the closest equivalent today would be MarsEdit or Microsoft’s forgotten Live Writer (2009).

[3] I used to tag Usenet posts with a unique string, then search for them in DejaNews and later Google Groups. So a bit of a micro-blog.

[4] I do use WordPress on Dreamhost for my share archive.

[5] Pinboard is about $10 for lifetime use. That’s so low it worries me. There’s a $25/yr option for a full text archive for every bookmark, but I don’t need that; it would just confuse my searches. Maybe Maciej should seek Backer funding for new features?

[6] Speaking of Backer funding, I’d fund a feature that gave me in-context editing of Feedbin feed titles.

[7] App.net is by far the most expensive of the services I use, but if you visit the site the yearly subscription fee is undiscoverable. You only see the free signup, without mention of follower limitations. This bothers me.

See also

Saturday, April 19, 2014

The Einstellung effect: simple truths we cannot see.

Epistemic closure (in political thought). Confirmation bias [6]. Availability heuristic (Kahneman System 1). Premature cognitive commitment. Even, perhaps [5], delusion. These are all forms of cognitive bias [1].

They drive me nuts [7]. Not because I have a problem with the concept of cognitive bias, but because I always know I’m missing something obvious.

It’s just out there. A better solution to a problem, something I’m doing wrong and can’t see it, a problem I don’t even know I have. Something in my blind spot that’s closing fast. An opportunity, a threat an ….. argggggggghhh!

Ok, I’m back. Do you know how hard it is to find a paper bag in 2014?

Cognitive bias is why, more than most people I know, I’m always seeking criticism. That includes anybody, often not a friend [2], who is happy to tell me why I’m an idiot. Every so often they see what I missed, and the joy of that correction more than compensates for minor tweaks of my thick skin.

So I’m happy to point to a new entry in the ‘what am I missing’ category — the Einstellung Effect. This is best described in a SciAm article that’s available from the 1st author’s web site (pdf, see also 2008 academic pub). Bilalić and McLeod’s work adds neurophysiology to one version of premature cognitive closure; a tantalizing connection given how much we seem to think with our bodies [3].

Their recent research has explored cognitive error in expert chess players whose very expertise leads them to errors more naive players would avoid. They seem to have adapted their visual cortex to solve certain thinking problems [4], and thus to be afflicted by the visual processing adaptations that evolved for the physical world (emphases mine)…

Why Good Thoughts Block Better Ones, Bilalić and McLeod, SciAm March 2014

…  Building on Luchins’s early work, psychologists replicated the Einstellung effect in many different laboratory studies with both novices and experts exercising a range of mental abilities, but exactly how and why it happened was never clear. Recently, by recording the eye movements of highly skilled chess players, we have solved the mystery. It turns out that people under the influence of this cognitive shortcut are literally blind to certain details in their environment that could provide them with a more effective solution. New research also suggests that many different cognitive biases discovered by psychologists over the years—those in the courtroom and the hospital, for instance— are in fact variations of the Einstellung effect.

… the mere possibility of the smothered mate move was stubbornly masking alternative solutions… infrared camera revealed that even when the players said they were looking for a faster solution—and indeed believed they were doing so—they did not actually shift their gaze away from the squares they had already identified as part of the smothered mate move.

I think of Delusion as an extreme manifestation of the Einstellung effect. Given our emerging understanding of autism and schizophrenia as similar manifestations of a neural network injury, I wonder if we’ll find connections between delusional beliefs and visual networks…

- fn -

[1] I love Wikipedia’s “List” articles; I suspect Google’s Knowledge Graph loves ‘em too. See also Wikipedia’s recursive Lists of Lists of Lists

[2] My favorite corrector is an app.net correspondent who I don’t know enough to claim as a friend, but who is a wonderfully cordial correspondent. That’s the best of all.

[3] An extraordinarily brilliant college roommate, who was later disabled by a schizophrenia like disorder, first suggested this to me in 1981. So the modern literature did not surprise me. Incidentally, he subsequently acquired a PhD and joined a NASA research facility. He found a way around his disorder.

[4] No, I cannot resist thinking of using GPUs to solve parallelism problems faster than CPUs.

[5] I personally think of delusion as an extreme form of cognitive closure; and I think it’s far more common than the psychotic disorders. An area ripe for research.

[6] Via the Einstellung article, a fantastic quote from one Francis Bacon’s 1620 Novum Organum:

“The human understanding when it has once adopted an opinion . . . draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside and rejects.... Men ... mark the events where they are fulfilled, but where they fail, though this happen much oftener, neglect and pass them by. But with far more subtlety does this mischief insinuate itself into philosophy and the sciences, in which the first conclusion colours and brings into conformity with itself all that comes after.”

Bacon always amazes. I’d declare him “father of cognitive science” for this quote alone. “Epistemic closure” is not new …

[7] This isn’t a new obsession. As a first year med student @1982 I devoured a 1970s text on clinical diagnosis that listed common cognitive errors, beginning with confirmation bias. This book has been rewritten many times since, alas the title and author are lost to my memory.

Saturday, March 15, 2014

Late revelation -- Doom of the Face

I have the face of a Disney villain.

This came to me as a slowly unfolding personal revelation after reading Emily Matchar’s humorous essay on “Bitchy Resting Face” …

Memoirs of an Un-Smiling Woman - Emily Matchar June 2013

… I struggle with what comedic YouTube-ers Broken People recently termed “Bitchy Resting Face" (hereafter known as BRF). Their PSA-style video introduces us to the plight of women who look sad or pissed off for no reason. Women whose boyfriends always ask them "what's wrong?" Women whose apparent unfriendliness earns raised eyebrows from store clerks. Women who just look, well, bitchy. Even though they’re not…

… My eyes, naturally almond-shaped, can look as if I'm narrowing them in suspicion. My mouth, when not actively smiling, settles into a rather grim line…

… At one of my first jobs, a more senior co-worker pulled me aside to ask why I looked so unhappy. "If you're having an issue, this office is a safe space for you to talk," he said.

I wasn't having an issue. I was just thinking about getting a cup of coffee…

… BRF, I've discovered, has its advantages. I've traveled the world solo, and very rarely been bothered. While female friends with more friendly, open faces report the standard street harassment - cat calls, men badgering them for dates, butt pinching - I float along in my own bitch-face bubble…

… I live in Hong Kong, one of the densest cities on earth, where turning your face into a blank mask is simply a tool of urban survival…

The first person I thought of while reading Emily’s story was a female friend and colleague who I’d once thought of as unhappy and disapproving. When she smiled it was a great pleasure; which is probably why her friends and colleagues often looked for ways to make her smile. Because, despite the first impressions, she was and is a kind, thoughtful and compassionate person.

Orwell and Lincoln were wrong, we don’t get the faces we deserve — at least not entirely. But I’ll get to that part.

The second person I thought of was me, and over the course of a few weeks I enjoyed the agreeable experience of having another piece of the puzzle of mortal existence fall into place. Of course this was not entirely good news, and it would have been better to have figured this out twenty years ago, but solving the puzzle of life is a hobby of mine. After 50 new discoveries are rare, so I particularly appreciated this one.

Of course I’m a guy, so I can’t call it BRF. I’ll have to call it VRF - for villainous resting face (ARF is not quite right - I think I look stern, harsh and disapproving rather than angry). Close-set, narrow and sunken eyes, small mouth and weak chin, post-CrossFit lean and hungry … yeah, kinda scary. Villainous. No wonder airport security always looks twice.

I wasn’t always this way. As a young adult I was a magnet for cult recruiters — innocent and gullible (though I was neither - faces mislead). Now, though I’m less harsh than my childhood self, no cultist would give me a second look. Over the years photos show my face changing, much as the NYT described.

Faces, as we know, bring a certain kind of destiny. Many a (sometimes disastrous) political career has been made by a strong jaw. There are few lean, beaky and weak-jawed faces running publicly traded corporations or nations (Tyler?). So there’s something to be said for knowing one’s face — denial has its advantages, but I prefer to see things as they are.

Of course “seeing things as they are” is the kind of thing we villains do. We make the hard choices others avoid, walk the shadows that must be walked, accept the responsibility for the greater good, grasp the … 

Hmm. Maybe I do deserve this face. Truth to tell, I do have some villainous henchman potential, and the usual weathered and worn experiences.

Deserved or not, we must either adjust to our faces or get plastic surgery (My Emily would laugh at that one — and then have me committed). Emily Matchar moved to Hong Kong, where her face worked for her. In my case there’s something to be said for teleconferences and working remotely. I do better as the Vizier and Henchman in the corporate shadows than as the face of the company. If I go the entrepreneurial route I would need a money-raising partner or avoid VCs and banks. When I lead teams I have to opt for “stern but fair” rather than “noble and true”. When I talk I have to find ways to laugh or smile — hopefully without the maniacal bit. That’s especially true with my kids — they tell me my “mildly disapproving look” is the glare of doom. 

If I have to find a job … well, the interview is a bit of an uphill battle. Not quite sure what to do about that.

On the bright side, solicitors leave my doorstep quickly.