
Sunday, March 09, 2014

I need a word for the willful failure of reason

Kahneman's Thinking, Fast and Slow is a brain dump of his research career and speculations about mind. The Wikipedia description isn't bad ...

... dichotomy between two modes of thought: System 1 is fast, instinctive and emotional; System 2 is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each type of thinking, starting with Kahneman's own research on loss aversion. From framing choices to substitution, the book highlights several decades of academic research to suggest that people place too much confidence in human judgment...

The thing I like about Kahneman's framework is that it explains how smart people can do "dumb" things. System 1, for example, correlates well with IQ test results; people with a strong System 1 answer quickly and correctly. System 1 is snappy, fun, and easy. It's snap judgment and gut instinct.

System 2 though, System 2 is hard work. It takes time to train it. System 2 is graphics software without hardware acceleration; it's no fun.

Both Systems can give correct answers, but while Kahneman recognizes the power of System 1, his true love is the plodding logic of System 2. One of the more interesting chapters of the book is a rough heuristic for making predictions using System 1 instinct adjusted by System 2 logic.

For what it's worth, I think of myself as having a reasonable System 1, but a really good System 2.

Which brings me to the word I want: a word for people who have a strong System 1 but a weak System 2. I want a word for the Paul Ryans and GWBs of the world.

The word isn't "dumb" or "stupid". It's a word for a character flaw rather than a cognitive limitation, a word for someone who has the power to reason well but chooses not to practice it. It's a word for willful intellectual laziness.

Anyone have a good word? A knowledge of Latin might help...

How could we create an evidence-based classification of disorders of the mind?

The software/hardware metaphor is usually considered as misleading as every other model of mind we've come up with.

I don't agree. My guess is it's an unusually good model -- one rooted in the physics of computation. Anything sufficiently complex can compute, which is, souls aside, the same as running a mind...

... in an alternative abstract universe closely related to the one described by the Navier-Stokes equations, it is possible for a body of fluid to form a sort of computer, which can build a self-replicating fluid robot ...

... A central insight of computer science is that, whenever a physical phenomenon is complex enough, it should be possible to use it to build a universal computer ...

Our minds have emerged to run on our desperately hacked and half-broken brains - in tens of thousands, perhaps hundreds of thousands, of years. In evolutionary terms that's insanely fast (and did it really never happen before?). Minds route around damage and adapt, as much as they can, to both adolescent transformation and adult senescence; they run and run until they slowly fade like a degraded hologram. It's no wonder minds are so diverse.

When that diversity intersects with the peculiar demands of our technocentric world we get "Traits that Reduce Relative Economic Productivity" -- and we get poverty and suffering. We get disease, and so we need names.

We need names because our minds can't reason with pure patterns -- we're not that smart. With names we can do studies, make predictions, select and test treatments.

Names are treacherous though. Once our minds create a category, it frames our thinking. We choose a path, and it becomes the only path. It might be a good path for a time, but eventually we have to start over. Over the past ten years researchers and psychiatrists have realized that our old "DSM" categories are obsolete.

So how could we start over? One approach, informed by the history of early 20th century medicine, is to classify disorders by underlying physiology. That's where terms like 'connectopathy' come from, and why we try to define mind disorders by gene patterns.

We need to do that, but lately I've wondered if it's the wrong direction. If minds really are somewhat independent of the substrate brain, then we may find that disorders of the substrate only loosely predict the outcomes of the mind. Very similar physiological disorders, for example, might produce disabling delusions in one mind and mere idiosyncrasies in another.

So maybe we need another way to attach labels to patterns of mind. One way to do this would be to create a catalogue of testable traits for things like belief-persistence, anxiety-response, digit-span, trauma-persistence, novelty-seeking, obsessiveness, pattern-formation and the like. My guess is that we could identify 25-50 traits that would span what is currently loosely associated with both normal variation and TRREPs like low IQ, schizophrenia, and autism. Run those tests on a range of humanity, then do cluster analysis and name the clusters.

Then start from there.
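A toy sketch of that last step, assuming entirely hypothetical trait scores - the subject count, trait count, and cluster number below are illustrative guesses, and scikit-learn's KMeans is just one convenient clustering choice:

```python
# Toy sketch: cluster hypothetical trait-test scores, then inspect the clusters.
# Every number here is an illustrative assumption, not a real test battery.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_subjects, n_traits = 1000, 30                   # e.g. 30 traits from a 25-50 item catalogue
scores = rng.normal(size=(n_subjects, n_traits))  # stand-in for real test results

kmeans = KMeans(n_clusters=8, n_init=10, random_state=42)
labels = kmeans.fit_predict(scores)               # one cluster label per subject

# Each cluster is a candidate "name" - inspect its trait profile before naming it.
for k in range(8):
    profile = scores[labels == k].mean(axis=0)
    elevated = np.argsort(profile)[-3:]           # the three most elevated traits
    print(f"cluster {k}: {(labels == k).sum()} subjects, elevated traits {elevated.tolist()}")
```

The names that come out of something like this would at least be grounded in measured traits rather than committee consensus; picking the number of clusters, and validating them, is the real work.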


Thursday, November 14, 2013

Human domestication, the evolution of beauty, and your wisdom tooth extraction

My 16yo is having his wisdom teeth removed tomorrow. Blame it on human domestication.

The Economist explains the process. Domestication, whether it occurs in humans, foxes, or wolves, involves changes to "estradiol and neurotransmitters such as serotonin" (for example). These changes make humans less violent and better care givers and partners -- major survival advantages for a social animal. They also have unexpected side-effects, like shortened muzzles and flattened faces for wolves, foxes, (cows?) and humans.

Since domesticated humans out-compete undomesticated humans, the physiologic markers of domestication become selected for. They begin to appear beautiful. Sex selection reinforces the domestication process.

It seems to be ongoing ...

 The evolution of beauty: Face the facts | The Economist:

... People also seem to be more beautiful now than they were in the past—precisely as would be expected if beauty is still evolving...

Which may be why we are becoming less violent.

Of course a shortened muzzle and smaller mandible have side-effects. Teeth in rapidly domesticating animals don't have room to move. Which is good news for orthodontists, and bad news for wisdom teeth.


Saturday, October 05, 2013

For American adults, are poverty and disability the same thing?

[Preface 9/6/13: I am enjoying the app.net discussion thread on this with @duerig and @clarkgoble. When reading this, try substituting TRREP - Trait that Reduces Relative Economic Productivity - for the word "disability". Also, please note disability is not inability. In my experience parenting/coaching two children with disabilities I think of managing disability like building a railroad across mountainous terrain. Sometimes reinforce, sometimes divert, always forward.]

-- 

Anosmia is not a disability.

Well, technically, it is. Humans are supposed to come with a sense of smell. For most of human existence anosmia was a significant survival problem. At the least, it helps to know when food has gone bad. So anosmia is a biological disability.

In today's America though, there's not much obvious economic downside to anosmia. Diminished appetite is more of a feature than a defect. There are many jobs where a keen sense of smell is a disadvantage -- including, I can assure you, medical practice. Anosmia is a biological disability, but it's not an economic disability. Not here and now anyway -- once it would have been.

Disability is contextual: it's the combination of variation, environment and measured outcome that defines disability.

What if I lose my right leg? Am I disabled then? Well, if I delivered mail I'd have a problem -- but in my job an insurance company would snort milk out its proverbial nose if I tried to claim long-term disability.

I think you can see where I'm going with this. Stephen Hawking is an extreme example -- you can have a lot of physical disability and not be economically disabled.

So how can I become disabled? 

Probably not through my "risky" CrossFit hobby, but my benign bicycle commute is another matter. Until that glorious day when humans are no longer allowed to drive cars, I'm at risk of a catastrophic head injury. An injury that may impact my cognitive processing, my disposition to use cognition ("rationality"), my judgment and temperament -- and leave me as completely disabled for high income work as if I were 85 [1]. At that point, barring insurance, I'm economically disabled and impoverished.

Clearly, acquired cognitive injury can be disabling. So what of congenital cognitive disorders like low functioning autism or severe impulse disorders? Impulsivity, inability to plan, very low IQ ... Clearly disabling. Without income support from family or government, extreme poverty is likely.

Ahh, but what of those born with average IQ, average rationality, average judgment, average temperament? Employment is likely -- but earnings will be limited. To be average in the economy of 2013 is to sit on the borderline of poverty -- and of disability. The difference will be decided by other factors, factors like race, location, and family wealth. An average person who looks and acts "white" and is born to a middle class family in Minnesota may make it into the dwindling middle class (for a time), an average person who looks and acts "black" and is born to a poor family in Mississippi is going to be impoverished.

Which brings me to my question - for American adults, are poverty and disability the same thing? Not entirely -- race, residence, and family income have an impact, particularly within some ill-defined "middle range" of "native disability". Not entirely -- but they are clearly related.

How related? Consider this OECD graph of poverty rates across nations with very different cultures, attitudes and histories:

[OECD chart: adult poverty rates before and after taxes and transfers]

Across Finland, Denmark, Sweden and the US we see a "natural" or baseline (pre-transfer) adult poverty rate of 24-32%, with Sweden and the US both at 28%. Not coincidentally, 30% is what I suspect our baseline rate of mass disability is today.

We can and should deal with poverty-enhancing factors like racism, unfunded schools and the like. That will make a difference for many -- and, if all goes well, we might get the US baseline poverty rate to be more like Denmark's. We'll go from 28% to 24%. Ok, maybe, in a perfect world, we get our baseline rate from 28% to 20%. Maybe.

To really deal with poverty though, we need to understand what real disability is. Economic disability in 2013 isn't a missing leg, it's poor judgment, weak rationality, low IQ, disposition to substance abuse. To conquer poverty, we will need to conquer disability - either with Danish style income transfers or with something better.

I think we can do better.

- fn -

[1] Social security is simply a form of insurance for age-related disability with an arbitrary (but pragmatic) substitution of chronology for disability measurement.


Thursday, September 26, 2013

Project Memfail: Tackling my search space problem

I've hit the Wall.

It's partly entropy-related wetware failure, but I'd be in trouble even if I were immortal. I have too many search spaces and information stores in too many places.

Things aren't so bad in my personal domain - I have two search spaces. My Simplenote files (via NvAlt), Email (Mail.app) and Google Docs (via Google Drive and CloudPull [1]) are mirrored back to my Spotlight search space, and I use a Google custom search engine against my blogs, archived web site and app.net streams. So my two personal search spaces are private/secure and public. I can manage that [2], and I'm careful not to add anything that would require a third search space.

In my work domain though it's a mess. I have information scattered across several Wikis, multiple document stores, my local file system and multi-GB email store, and the remnants of a blog whose server died. All of this in an environment that, for multiple reasons, is driving to an information half-life of one year. I have too many search spaces; I can no longer track them all.

So I'm launching Project Memfail :-). I need to rescope my search spaces - esp. my corporate one.

- fn -

[1] My main information loss over the past decade was GR (Google Reader) Shares; I have some recovery there via CloudPull.
[2] Native spotlight search has issues, but I can work around those. Of course when Google Custom Search dies I'll be in for a painful transition.

Saturday, March 09, 2013

Strange loops - five years of wondering why our corporate units couldn't cooperate.

Five years ago I tried to figure out why we couldn't share work across our corporate units.

This turned out to be one of those rabbit hole questions. The more I looked, the stranger it got. I knew there was prior work on the question -- but I didn't know the magic words Google needed. Eventually I reinvented enough economic theory to connect my simple question to Coase's 1937 (!) theorem, 1970s work on 'the theory of the firm', Brad DeLong's 1997 writings on The Corporation as a Command Economy [1], and Akerlof's 'information asymmetry'. [2]

Among other things I realized that modern corporations are best thought of as feudal command economies whose strength comes more from their combat capacity and ability to purchase legislators and shape their ecosystems than from goods made or services delivered.

Think of the Soviet Union in 1975.

All of which is, I hope, an interesting review -- but why did I title this 'Strange loop'?

Because I used that term in a 2008 post on how Google search, and especially their (then novel) customized search results, was changing how I thought and wrote. This five year recursive dialog is itself a product of that cognitive extension function.

But that's not the only strange loop aspect.

I started this blog post because today I rediscovered DeLong's 1997 paper [1] as a scanned document. I decided to write about it, so I searched on a key phrase looking for a text version. That search, probably customized to my Gordon-identity [3], returned a post I wrote in 2008. [4]

That's just weird.

 - fn -

[1] Oddly the full text paper is no longer available from Brad's site, but a decent scan is still around.

[2] There are at least two Nobel prizes in Economics in that list, so it's nice to know I was pursuing a fertile topic, albeit decades late.

[3] John Gordon is a pseudonym; Gordon is my middle name.

[4] On the one hand it would be nice if I'd remembered I wrote it. On the other hand I've written well over 10,000 blog posts. 


Sunday, March 03, 2013

The canid domestication of homo sapiens brutalis

Eight years ago, I wondered if European Distemper killed the Native American dog and added a footnote on an old personal hypothesis ...

Humans and dogs have coexisted for a long time, it is extremely likely that we have altered each other's evolution (symbiotes and parasites always alter each other's genome). ... I thought I'd blogged on my wild speculation that it was the domestication of dogs that allowed humans to develop technology and agriculture (geeks and women can domesticate dogs and use a powerful and loyal ally to defend themselves against thuggish alphas) -- but I can't find that ...

 Happily, others have been pursuing this thought ....

We Didn’t Domesticate Dogs. They Domesticated Us - Brian Hare and Vanessa Woods

...With this new ability, these protodogs were worth knowing. People who had dogs during a hunt would likely have had an advantage over those who didn't. Even today, tribes in Nicaragua depend on dogs to detect prey. Moose hunters in alpine regions bring home 56 percent more prey when they are accompanied by dogs. In the Congo, hunters believe they would starve without their dogs.

Dogs would also have served as a warning system, barking at hostile strangers from neighboring tribes. They could have defended their humans from predators.

And finally, though this is not a pleasant thought, when times were tough, dogs could have served as an emergency food supply. Thousands of years before refrigeration and with no crops to store, hunter-gatherers had no food reserves until the domestication of dogs. In tough times, dogs that were the least efficient hunters might have been sacrificed to save the group or the best hunting dogs. Once humans realized the usefulness of keeping dogs as an emergency food supply, it was not a huge jump to realize plants could be used in a similar way.

So, far from a benign human adopting a wolf puppy, it is more likely that a population of wolves adopted us. As the advantages of dog ownership became clear, we were as strongly affected by our relationship with them as they have been by their relationship with us....

The primary predators of humans, of course, are other humans. Women's need for protection against men is particularly acute. So which gender would be most interested in, and capable of, the domestication of a strong and loyal ally? What changes would a dog's presence make to a society and a species, and who would lose most when agriculture made dogs less useful?

Tuesday, January 29, 2013

High functioning schizophrenia: an academic's story.

"THIRTY years ago, I was given a diagnosis of schizophrenia."

That's a helluva way to start one of the most important NYT OpEd's of 2013 ...
Successful and Schizophrenic - ELYN R. SAKS - NYTimes.com 
... I made a decision. I would write the narrative of my life. Today I am a chaired professor at the University of Southern California Gould School of Law... 
... Although I fought my diagnosis for many years, I came to accept that I have schizophrenia and will be in treatment the rest of my life. Indeed, excellent psychoanalytic treatment and medication have been critical to my success... 
... Over the last few years, my colleagues, including Stephen Marder, Alison Hamilton and Amy Cohen, and I have gathered 20 research subjects with high-functioning schizophrenia in Los Angeles.. 
... At the same time, most were unmarried and childless, which is consistent with their diagnoses. 
... in addition to medication and therapy, all the participants had developed techniques to keep their schizophrenia at bay. For some, these techniques were cognitive... 
... One of the most frequently mentioned techniques that helped our research participants manage their symptoms was work... 
... Personally, I reach out to my doctors, friends and family whenever I start slipping, and I get great support from them. I eat comfort food (for me, cereal) and listen to quiet music. I minimize all stimulation. Usually these techniques, combined with more medication and therapy, will make the symptoms pass. But the work piece — using my mind — is my best defense. It keeps me focused, it keeps the demons at bay. My mind, I have come to say, is both my worst enemy and my best friend... 
Elyn R. Saks is a law professor at the University of Southern California and the author of the memoir “The Center Cannot Hold: My Journey Through Madness.”
My freshman roommate developed what I believe was schizophrenia. He dropped out for years, then one day returned to school, completed a PhD and started working. I suspect he was not "cured", just as Elyn Saks is not cured.

Whatever the limitations of "schizophrenia" as a diagnostic label (they are many), we now know that a few people are able to manage around a grievous and terrible disability. They have shown that it can be done.

That's important. Remember Roger Bannister? He was one of the first Europeans to officially run a four-minute mile (I suspect other humans had done it before). Before he did it, few tried. Now many men have done it, including one runner in his 40s. It's still hard to do, but it's not news any more.

Succeeding with schizophrenia is the psychic equivalent of running the four-minute mile. Terribly hard to do, but once done methods can be refined, goals set, support provided, lessons learned.

Lessons that I suspect will be of value to many persons, not just schizophrenic and autistic adults, but also all inheritors of the 150,000-year-old human mind, hacked together in a blink of Darwin's eye. The techniques used to manage severe psychic turmoil can also be used to manage the lesser afflictions we all experience.

Elyn Saks and fellow champions, we salute you.


Saturday, November 10, 2012

XMind: Impressions and comments on the mind mapping market

It's been two years since I first looked at XMind. During that time I used MindManager at work and experimented with MindNode Pro at home. I mostly use the tools to explore new terrain, and as a visual aid to some teleconferences (share the mind map while discussing).

MindManager wasn't ideal, but it was a decent tool when we could buy it for $100 or so. Their current pricing is too high for team use, and I really did want the option of sharing maps. So when I switched projects I also switched to XMind. I don't have time for a proper review, but I can share some bullet points on why I chose it, what it's like, and what I would love to see.

Why I chose XMind

  • It runs on Windows 7 and it's nice I can also use it on my Air.
  • Price: Free for a very solid version, upgrade to pro was $80 for me. I don't like free software, but we can't afford MindManager - so this freemium model is a good balance.
  • Longevity: It's been on the market for several years and just went through a significant update.
  • Quality: it's got bugs, but it's tolerable so far.
  • It's a simplified clone of MindManager so it has a good feature set.
  • The base version is "open source". A weak form of insurance, but could be worse.
  • Freemind lacks the corporate look and seemed to have a steeper learning curve for non-geeks.
Impressions, including problems
  • Data lock: The inevitable for all but Freemind
  • Java: The UI is native, but the back-end requires Java. That's bad enough on Windows, but for a Mac user Java installation feels like installing a malware-welcome sign.
  • There's no built-in Help, only web help.
  • It is slow to load what I consider a mid-sized map.
  • It is pretty reliable, but I have run into a significant bug with string search. Search sometimes fails unless the map is fully expanded.
  • It's made in China, and the language localization is imperfect. "Extend" is used in place of "Expand" for example, and the mouse-over tooltip text is quaint.
Thoughts on the mind map / concept visualization marketplace
 
I've seen cognitive-support apps come and go for twenty years, and I don't think we're making much progress. We're shuffling in place. This definitely isn't a technology problem -- we had similar apps running on the computing-equivalent of medieval tech. I don't think it's due to lack of imagination, though that has occurred to me. I think it's a business problem -- the market for high-end cognitive-extension concept modeling software is tiny; probably not more than 1 in 10,000 adults, perhaps 300,000 worldwide on all computer platforms. If we then ask how many can/will pay $30 a year for a product … we're talking a modest income stream for 1-2 developers owning a world market.
 
Yeah, this is a business problem. So we're not going to get what I want through traditional market-driven mechanisms. We're going to have to figure out a way to grow something from modest means, and it's going to have to be built atop something else.
 
So here's how I think it could work. Start with the standard data formats used in other apps like Notational Velocity for the nodes. That means UTF-8 including "plain text", RTF, and markdown with a simple title, tag, date/time and text metadata model. That way the "nodes" can live in a simple Spotlight/Windows Search indexed folder and can be used by SimpleNote or Dropbox.
 
Now put the graph structure as XML or XMLized RDF in just another note in the same folder with a special name.
 
Optionally, allow the folder to contain other files, images, and so on (future).
 
That's the data. Now the app reads in the RDF and the nodes and renders the relationships. Ideally many different apps work with the same data structure. There's very little income here, so we're talking labor-of-love with a bit of cash to pay for a new computer. From this base, over time, with full data portability, we can slowly build a concept-visualization ecosystem with full data freedom.
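As a purely hypothetical sketch of that architecture - the folder location, the `__graph__.xml` name, and the edge schema below are all invented for illustration, not a spec:

```python
# Sketch of the proposed format: plain-text/markdown "nodes" in one folder,
# plus a single specially named note holding the graph structure as XML.
import pathlib
import xml.etree.ElementTree as ET

folder = pathlib.Path.home() / "Notes"    # a Spotlight/Windows Search indexed folder
graph_note = folder / "__graph__.xml"     # the special graph note (name is invented)

# Nodes are ordinary notes - readable by SimpleNote, Dropbox, or any text editor.
nodes = {p.name: p.read_text(encoding="utf-8")
         for p in folder.iterdir()
         if p.suffix in (".txt", ".md")}

# The graph note might hold edges like:
#   <map><edge from="coase-1937.md" to="theory-of-firm.md" label="cites"/></map>
for edge in ET.parse(graph_note).getroot().iter("edge"):
    src, dst = edge.get("from"), edge.get("to")
    if src in nodes and dst in nodes:     # render only edges whose nodes exist
        print(src, "->", dst, f"({edge.get('label', '')})")
```

The point of the split is that losing the graph note costs you only the visualization; the nodes survive in any plain-text ecosystem.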
 
Anyone have other ideas?


Tuesday, October 30, 2012

Usability of electronic health records: test cognitive cost first

Obama raised mileage standards for my industry.

Ok, so it wasn’t him personally, and it’s not mileage, and I don’t exactly own the health care “IT” industry. Even so, I can better imagine now what it was like to work for GM in the 70s when mileage standards were first set.

For my industry the ‘mileage standards’ are known as ‘meaningful use’, as in MU1, MU2 and MU3. Despite the confusing name these are effectively increasingly stringent performance standards for electronic health records, akin to mileage and emission standards for automobiles. They’re reshaping the industry, sometimes for better and sometimes for worse. (Should we, for example, measure the value of all of our measuring before we do more measuring?)

The industry has moved through MU1 and is now digesting MU2 with MU3 on the horizon (assuming Obama wins, though Gingrich was a great fan of this sort of thing). MU3 is still under construction, but one consideration is the inclusion of ‘usability standards’.

For various reasons I’m not thrilled with the idea of setting usability standards, but the term is broad enough to include something I think we really ought to study: The impact of complex clinical documentation and workflow systems on the limited cognition and decision making budget of the human brain…


I’ve written about this before …

Gordon's Notes - Electronic health record use and physician multitasking performance 4/2010

Llamas and my stegosaurus: Living with a limited brain
Some interesting research has come out recently about the processing capacity of brains. For example, that the medial prefrontal cortex can only handle two tasks at once, or that working memory can only handle about 7 items at a time (but what's an item?), or that when people are actively trying to remember something complicated, their impulse control is reduced…

Since then this topic has gotten a bit more attention, particularly from a study of Israeli judges …

Do You Suffer From Decision Fatigue? - NYTimes.com 8/2011

… There was a pattern to the parole board’s decisions, but it wasn’t related to the men’s ethnic backgrounds, crimes or sentences. It was all about timing, as researchers discovered by analyzing more than 1,100 decisions over the course of a year. Judges, who would hear the prisoners’ appeals and then get advice from the other members of the board, approved parole in about a third of the cases, but the probability of being paroled fluctuated wildly throughout the day. Prisoners who appeared early in the morning received parole about 70 percent of the time, while those who appeared late in the day were paroled less than 10 percent of the time…

… Decision fatigue helps explain why ordinarily sensible people get angry at colleagues and families, splurge on clothes, buy junk food at the supermarket and can’t resist the dealer’s offer to rustproof their new car. No matter how rational and high-minded you try to be, you can’t make decision after decision without paying a biological price….

… These experiments demonstrated that there is a finite store of mental energy for exerting self-control. When people fended off the temptation to scarf down M&M’s or freshly baked chocolate-chip cookies, they were then less able to resist other temptations….

Patient care is an endless series of decisions (though over time more behavior, for worse and for better, becomes automatic). All physicians start with a cognitive budget for decision making, and every decision depletes it. Unfortunately using an EHR also consumes decision making capacity – perhaps far more than use of paper records. There’ve been a few studies over the past fifteen years hinting at this, but they’ve gone largely unnoticed.

So, if we’re going to study ‘usability’, let’s specifically study the impact of various electronic health records on cognitive budgets. We now know how to do those experiments, so let’s put some of that MU3 money to good use, towards supporting tools that enable better decisions – because they’re less tiring.

Think of it as meeting mileage standards through aerodynamic design.

Tuesday, September 18, 2012

Tachytely and human evolution: implications for the Drake Equation and Fermi Paradox

I haven't done a Drake/Fermi Paradox post for ages. A lot has happened in the meantime; in particular, estimates of the number of potentially-life-compatible planets in our galaxy have grown exponentially.

Of course not all life-supporting planets will develop sentient tool-using species. Unless there's something about sentience and tool-use feedback loops that produces tachytelic development. That would boost the Drake estimate into the low thousands. We ought to be tripping over little green things.
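For scale, a back-of-envelope Drake calculation - every parameter below is an illustrative guess, with the tachytely argument folded into the intelligence term:

```python
# Back-of-envelope Drake equation: N = R* * fp * ne * fl * fi * fc * L
# Every value here is an illustrative assumption, not a measurement.
R_star = 7     # new stars per year in the galaxy
f_p    = 0.5   # fraction of stars with planets
n_e    = 2     # life-compatible planets per such system
f_l    = 0.5   # fraction of those where life appears
f_i    = 0.5   # fraction developing intelligence - generous, per the tachytely argument
f_c    = 0.5   # fraction producing detectable technology
L      = 2000  # years a technological civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)       # 1750.0 - "low thousands", if these guesses hold
```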

But we don't. Of course if technological civilizations all self-destruct quickly this would all make sense.

Monday, September 03, 2012

Do evolutionary strategies evolve?

Biologists study evolutionary "strategies", such as r and K selection.

These are the strategies deployed by the Great Programmer as she fiddles with the game states of the multiver... erkkk. Just kidding. These are, of course, human terms for the emergent phenomena of natural selection.

At a more granular level, a predator's niche might be contested on the basis of bigger teeth, stronger claws, faster moves, greater endurance, or bigger brains.

Likewise, microbes, who rule the earth, have a range of "strategies". Symbiosis, parasitism, fast reproduction, encysting and so on.

Presumably the catalog of strategies changes over time. Before there were teeth, big teeth strategies were not available.

Before there were neurons, big brain strategies didn't work.

So that leads to the obvious question, do evolutionary strategies evolve? That is, do new strategies emerge from variations of strategies such that the strategies themselves are subject to selection pressure (a sort of meta-selection I suppose)?

Seems an obvious question, but as of Sept 2012 Google has 9 hits on that precise phrase, none by biologists.

So I guess it's an obvious question, but maybe obviously dumb. I'm surprised though that I didn't find a blog post explaining why it's dumb.

(A bit of context: this came up in a discussion with my 13yo about what species would fill our ecological niche (global multicellular apex predator). Having hit upon the strategy of investing in brains, would natural selection keep returning to the theme?)

Monday, August 20, 2012

How much of America's healthcare crunch is dementia care?

US healthcare costs were $2.6 trillion in 2010; about 18% of the 2011 US economy. Of that, dementia care costs about $200 billion, or about 8% of our total health care bill.

Demographics, and our failure to prevent brain deterioration, mean dementia costs will grow. Since demented patients often exhaust all personal and family financial resources, these costs will show up as Medicaid expenditures.

Even so, dementia is less of a problem than I had long thought. Even if costs were to increase by another 50% over the next decade, it still wouldn't break the bank.

Faced with the facts, I'm now forced to examine my unexamined assumptions. I can now imagine why dementia might turn out to be a bit of a bargain.

Many, if not most, dementia patients no longer receive aggressive medical care. They do need hands-on care, but in the modern economy there's no lack of people reasonably happy to do that work for comparatively little money. Demented people don't eat that much, and they don't require costly ingredients or food preparation. They don't demand the latest gadgets or costly bandwidth or cutting edge architecture or modern art on the walls. They can live where land is cheap.

In many ways, demented people are cheaper to maintain than non-demented people of similar ages. Given that neither produce wealth, from an economic accounts perspective dementia might be a money-saver.

Even as our dementia population grows, increasing costs may be offset by advances in robotics and remote monitoring, and, in time, by widespread acceptance of euthanasia [1].

Of course dementia and pre-dementia can bankrupt individual families, but in our income-skewed economy those bankruptcies don't add up to all that many billions.

To answer my title question then, dementia care does not appear to be a uniquely large part of our healthcare crunch. Obesity, for example, may be more important.

That's too bad, because many of us have a personal interest in a business case for dementia prevention...

[1] I want my kids to have a robust financial incentive to pull the off switch on my future demented self.

Sunday, July 29, 2012

Poverty in the west

For much of human history slavery, rape, abuse of children and women, heavy drinking, murder, cruelty, and animal torture were commonplace and accepted.

Not so much now, at least in wealthy nations. Humans are immensely imperfect and prone to regression, but we are better than we were. Progress happens.

Progress happens, but then the bar goes up. We clean the air of LA and the acid rain of the Northeast, so we get global CO2 management as our next assignment. We work through a chunk of our racist and genocidal history, and we get to work on gay marriage. Fifty years from now we won't eat animals. And so it goes.

Poverty elimination is also on the list. Might be an even harder problem than CO2 emissions. The good news is that worldwide poverty is improving very quickly...

US intelligence agency sees world poverty in sharp drop, rising fight for resources by 2030 - The Washington Post

Poverty across the planet will be virtually eliminated by 2030, with a rising middle class of some two billion people pushing for more rights and demanding more resources, the chief of the top U.S. intelligence analysis shop said Saturday.

If current trends continue, the 1 billion people who live on less than a dollar a day now will drop to half that number in roughly two decades, Christopher Kojm said...

I don't think 'virtually eliminated' means what Kojm thinks it means - but this is good news all the same.

The bad news is that poverty in America isn't going away. Peter Edelman runs the numbers on our brand of poverty ...

Why Can’t We End Poverty in America? - Peter Edelman - NYT

... The lowest percentage in poverty since we started counting was 11.1 percent in 1973. The rate climbed as high as 15.2 percent in 1983. In 2000, after a spurt of prosperity, it went back down to 11.3 percent, and yet 15 million more people are poor today...

... We’ve been drowning in a flood of low-wage jobs for the last 40 years. Most of the income of people in poverty comes from work. According to the most recent data available from the Census Bureau, 104 million people — a third of the population — have annual incomes below twice the poverty line, less than $38,000 for a family of three. They struggle to make ends meet every month.

Half the jobs in the nation pay less than $34,000 a year, according to the Economic Policy Institute. A quarter pay below the poverty line for a family of four, less than $23,000 annually. Families that can send another adult to work have done better, but single mothers (and fathers) don’t have that option. Poverty among families with children headed by single mothers exceeds 40 percent.

Wages for those who work on jobs in the bottom half have been stuck since 1973, increasing just 7 percent...

Addressing these problems will be challenging. Children are very expensive in a post-industrial society, yet much of American poverty is concentrated in father-free families managed by a single mother. Their poverty would be easier to manage if they had made different fertility choices; simplistic income subsidies could incent politically unsustainable behaviors.

Fortunately there are strategies which eliminate perverse incentives. Tying income to managed work, providing health and child care (including easy access to contraception), and funding quality educational programs alleviate poverty and provide the means and incentives to make thoughtful fertility choices.

A different slice of our poverty comes from a mismatch between post-industrial employment and human skills. This isn't going away; 3D printing of manufactured goods will do to manufacturing what full text search did to the law. Meanwhile six percent of Americans suffer from a serious mental illness every year and twenty-five percent of Americans have a measured IQ less than 90. Given changes in technology, and the automation of many jobs, is it conceivable that 20% of Americans are relatively disabled?

Again, the strategy for this community is subsidized work -- the same strategy used for the "special needs" community. (Since I won't get to retire ever, I assume I'll be in this community sooner or later.) 

We know what we need to do. We even know where the money will come from -- from taxing CO2 emissions, financial transactions, and the 5% (ouch).

Sooner or later, we'll do it.


Tuesday, July 10, 2012

Is labor lumpish in whitewater times?

Krugman is famously dismissive of claims of structural aspects to underemployment (though years ago he wasn't as sure). DeLong, I think, is less sure.

Krugman points to the uniformity of underemployment. If there were structural causes, wouldn't we see areas of relative strength? It seems a bit much to claim that multiple broad-coverage structural shocks would produce such a homogeneous picture.

Fortunately, I fly under the radar (esp. under Paul's), so I am free to wonder about labor in the post-AI era, complicated by the rise of China and India and the enabling effect of IT on financial fraud. Stories like this catch my attention ...

Fix Law Schools - Vincent Rougeau - The Atlantic

... the jobs and high pay that used to greet new attorneys at large firms are gone, wiped away by innovations such as software that takes seconds to do the document discovery that once occupied junior attorneys for scores of (billable) hours while they learned their profession..

Enhanced search and discovery is only one small piece of the post-AI world, but there's a case to be made that it wiped out large portions of a profession. Brynjolfsson and McAfee expand that case in Race Against the Machine [1], though almost all of their fixes increase economic output rather than addressing the core issue of mass disability. The exception, perhaps deliberately numbered 13 of 19, is easy to miss ...

13. Make it comparatively more attractive to hire a person than to buy more technology through incentives, rather than regulation. This can be done by, among other things, decreasing employer payroll taxes and providing subsidies or tax breaks for employing people who have been out of work for a long time. Taxes on congestion and pollution can more than make up for the reduced labor taxes.

Of course by "pollution ... tax" they mean "Carbon Tax" [1]. The fix here is the same fix that has been applied to provide employment for persons with cognitive disabilities such as low IQ and/or autism. In the modern world disability is a relative term that applies to a larger population.

If our whitewater times continue, we will either go there or go nowhere.

[1] They're popular at the "Singularity University" and their fixes are published in "World Future Society". Outcasts they are. Their fan base probably explains why they can't use the "Carbon" word; WFS/SU people have a weird problem with the letter C. 


Thursday, July 05, 2012

Google's Project Glass - it's not for the young

I've changed my mind about Project Glass. I thought it was proof that Brin's vast wealth had driven him mad, and that Google was doing a high speed version of Microsoft's trajectory.

Now I realize that there is a market.

No, not the models who must, by now, be demanding triple rates to appear in Google's career-ending ads.

No, not even Google's geeks, who must be frantically looking for new employment.

No, the market is old people. Geezers. People like me; or maybe me + 5-10 years.

We don't mind that Google Glass looks stupid -- we're ugly and we know it.

We don't mind that Google Glass makes us look like Borg -- we're already good with artificial hips, knees, lenses, bones, ears and more. Nature is overrated and wears out too soon.

We don't mind wearing glasses, we need them anyway.

We don't mind having something identifying people for us, recording where we've been and what we've done, selling us things we don't need, and warning us of suspicious strangers and oncoming traffic. We are either going to die or get demented, and the way medicine is going the latter is more likely. We need a bionic brain; an ever-present AI keeping us roughly on track and advertising cut-rate colonoscopy.

Google Glass is going to be very big. It just won't be very sexy.

Wednesday, June 27, 2012

Google's A.I. recognizes cats. Laugh while you can.

Google's brain module was trained on YouTube stills. From vast amounts of data, one image spontaneously emerged ...
Using large-scale brain simulations for machine learning and A.I. | Official Google Blog 
".. we developed a distributed computing infrastructure for training large-scale neural networks. Then, we took an artificial neural network and spread the computation across 16,000 of our CPU cores (in our data centers), and trained models with more than 1 billion connections.  
...  to our amusement, one of our artificial neurons learned to respond strongly to pictures of... cats ... it “discovered” what a cat looked like by itself from only unlabeled YouTube stills. That’s what we mean by self-taught learning... 
... Using this large-scale neural network, we also significantly improved the state of the art on a standard image classification test—in fact, we saw a 70 percent relative improvement in accuracy. We achieved that by taking advantage of the vast amounts of unlabeled data available on the web, and using it to augment a much more limited set of labeled data. This is something we’re really focused on—how to develop machine learning systems that scale well, so that we can take advantage of vast sets of unlabeled training data.... 
... working on scaling our systems to train even larger models. To give you a sense of what we mean by “larger”—while there’s no accepted way to compare artificial neural networks to biological brains, as a very rough comparison an adult human brain has around 100 trillion connections.... 
..  working with other groups within Google on applying this artificial neural network approach to other areas such as speech recognition and natural language modeling."
Hah, hah, a cat. That's so funny. Unless you're a mouse of course.

The mouse cortex has 14 million neurons and a maximum of 45K connections per neuron, so, as a ballpark estimate, perhaps 300 billion connections (real estimates are probably known from the mouse connectome project but I couldn't find them). So in this first pass Google has less than 1% of a mouse connectome.

Assuming they double the connectome every two years, they'd need about eight doublings to hit mouse scale - roughly sixteen years, or around 2028. There's a good chance you and I will still be around then.
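The doubling arithmetic, using the connection counts assumed above:

```python
# How many two-year doublings from Google's 2012 network (~1 billion connections)
# to the ballpark mouse estimate (~300 billion)?
import math

doublings = math.log2(300e9 / 1e9)  # about 8.2 doublings
years = doublings * 2               # at one doubling every two years
print(f"{doublings:.1f} doublings, about {years:.0f} years -> around {2012 + round(years)}")
```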

I've long felt that once we had a "mouse-equivalent" connectome we could probably stop worrying about global warming, social security, meteor impacts, cheap bioweapons, and the Yellowstone super volcano.

Really, we're just mice writ large. That cat is looking hungry.

Incidentally, Google didn't use the politically incorrect two-letter acronym in the blog post, but they put it, with periods (?), in the post title.

Sunday, May 27, 2012

A millennium of European history in six bullet points

A thousand years of European History - special needs history version ...

  • 1000 Middle Ages. Lots of small Kingdoms and local rulers. Church very powerful. Terrible Black Plague wipes out much of Europe. 
  • 1500 Renaissance and Protestant Reformation. Knowledge from ancient Greece and Rome and from China and India and the Middle East comes to Europe. New World “discovered” by Europeans. Catholic church loses control of power during Protestant Reformation. 
  • 1600 Scientific Revolution. Late in the Renaissance Europe invented the idea of Science. That changed the way people thought about the world and how they made things. 
  • 1700 The Enlightenment. Machines and ideas traveled around the world and caused Revolutions. 
  • 1800 The Industrial Age. The steam engine and other machines meant that animals and human muscles weren’t as important. The world population started to grow very quickly. Energy was important. 
  • 1950 The Modern Age. Today machines are starting to replace or extend the human brain. We don’t know what to call this age.

I'll update the PDF later today. When it's done I'll do an ePub version too.


Saturday, May 26, 2012

Euthanasia will come to America within the next twenty years

Thirty years ago I was distressed by the NIH's relative lack of interest in dementia research. Anyone who could do arithmetic knew what was coming; the time for major action was 1982.

Now we have an "urgent" NIH program focusing on dementia [1] -- but it's 25 years too late. Post-boomers will face a deluge of former-people whose bodies outlast their brains. You'd call us Zombies, except that there will be a cure of sorts ...

Parent Health Care and Modern Medicine’s Obsession With Longevity -- Michael Wolff - New York Magazine

... after due consideration, I decided on my own that I plainly would never want what LTC insurance buys, and, too, that this would be a bad deal. My bet is that, even in America, even as screwed up as our health care is, we baby-boomers watching our parents’ long and agonizing deaths won’t do this to ourselves. We will surely, we must surely, find a better, cheaper, quicker, kinder way out.

Meanwhile, since, like my mother, I can’t count on someone putting a pillow over my head, I’ll be trying to work out the timing and details of a do-it-yourself exit strategy. As should we all.

Things that can't go on don't. One way or another, America will figure out how to shorten the duration of Boomer dementia. My own plan is to buy a cottage by a cliff with no railings.

[1] "Better treatments by 2025", a meaningless goal that is sure to be met. Funded with $50 million, or what modern CEOs make every four months. Wake me up when it's funded with $50 billion.

Saturday, May 19, 2012

Who were the crazy genius scientists?

Which famous scientists and/or mathematicians were also "crazy" (e.g., far outside behavioral "norms") during their adult productive lives (excluding those, like Pauling, who became eccentric at an age where dementia is common)?

My current list is ...
  1. Newton: Perhaps autism spectrum, but he was so brilliant, and so bizarre, that he's untypable. He's outside of the human range. He may have had mercury poisoning late in life, or perhaps a late-onset schizophrenia-like psychosis.
  2. John Nash: paranoid schizophrenic, though somewhat late-onset. His recovery is remarkable, as was Newton's -- but he was psychotic for a longer time period.
  3. Kurt Godel: schizotypal, later in life delusional beliefs with paranoid features.
  4. Nikola Tesla: OCD, Autism spectrum?
  5. Henry Cavendish: social phobia, anxiety disorder.
  6. Boltzmann: bipolar disorder (classic)
Our classifications of mental illness are pretty weak even in normal IQ adults; this group is probably unclassifiable. Who else should be on the list?

Update: Philip K Dick wasn't quite in this group, but his late-onset psychosis experience resembles Tesla's. Matt suggested Godel and Boltzmann. The pattern of schizotypal personality disorder behaviors with late-onset deterioration or psychosis might apply to Tesla, Newton and Godel. Boltzmann and Nash had more classic neuropsychiatric disorders.

These are most extraordinary minds. It would not be surprising if they had extraordinary dysfunctions.

Update 6/7/2012: An academic opinion.