
The ‘Person of Interest’ Machine – The Air Force wants it!

If you are a fan of spy-ish adventure shows, you have probably given CBS’s “Person of Interest” a gander. If not, it’s decent to pretty good, depending on the episode. On my DVR list. The premise is that a reclusive billionaire tech wizard known only as “Finch” (played by Lost’s Benjamin Linus – actor Michael Emerson) builds a ‘machine’ for the government that integrates all forms of surveillance across the country with breakthrough artificial intelligence to identify “Persons of Interest” – people who are highly probable to be perpetrators or victims based on their activities. Disturbed that the government was going to act only on output from “the Machine” that had national security implications, our genius put a back door in the machine that sent him the social security numbers of the highest-likelihood “persons of interest”.

Enlisting former CIA operative Reese (the show likes singular names… played by Jim Caviezel) as his investigator/action officer, each episode follows the formula of getting a name, figuring out if they are a good guy or a bad guy, and either protecting them or bringing them to justice (sometimes “with extreme prejudice”…).

The lead-ins from each commercial break show “the machine’s-eye view” of the world, with reticles tracking faces and on-screen text of conversations, texts and, if close enough, lip reading. This somewhat dystopian segue adds an interesting context to the plot as it hashes out.

Enter the Air Force. I’m not sure what it is about Air Force culture, but they have a penchant for thinking technology can cure every ill. Having suffered a rather bloody rebuke from a Marine General at JFCOM regarding their darling “Effects-Based Warfare” a few years back, they seem to believe that the problem of not being able to know what the bad guys are going to do means they just have to create a technology that can tell them beforehand what the bad guys are going to do. Buoyed by snake-oil salesmen touting “predictive intelligence”, the Air Force has dabbled for years with notions of how such a capability would return their Newtonian Effects-Based Warfare ideas to primacy in a world where “what happens next” is stuck inside people’s heads.

This piece in Wired’s Danger Room discusses the desire of the Chief Scientist of the Air Force to create Finch’s Machine – at least in part. Calling it “Social Radar”, he talks about it “seeing into the hearts and minds of people”.

Does he really mean it? Is he seriously proposing a mind-reading machine? Appears so. “‘Don’t just give me a weather forecast, Air Force, give me an enemy movement forecast.’ What’s that about? That’s human behavior. And so [we need to] understand what motivates individuals, how they behave.”

And if you question whether the vision really rivals the scale and pervasiveness of Finch’s Person of Interest Machine, Dr. Maybury describes his “Machine” thus:

Using biometrics, Social Radar will identify individuals, Maybury noted in his original 2010 paper on the topic for the government-funded MITRE Corporation. Using sociometrics, it will pinpoint groups. Facebook timelines, political polls, spy drone feeds, relief workers’ reports, and infectious disease alerts should all pour into the Social Radar, Maybury writes, helping the system keep tabs on everything from carbon monoxide levels to literacy rates to consumer prices. And “just as radar needs to overcome interference, camouflage, spoofing and other occlusion, so too Social Radar needs to overcome denied access, censorship, and deception,” he writes.

The paper opines that radar “provided a superhuman ability to see objects at a distance through the air,” and connects the dots to the need for a new superhuman capability:

“Accordingly, a social radar needs to be not only sensitive to private and public cognitions and the amplifying effect of human emotions but also sensitive to cultural values as they can drive or shape behavior.”

Going further:

For example, radar or sonar enable some degree of forecasting by tracking spatial and temporal patterns (e.g. they track and display how military objects or weather phenomena move in what clusters, in which direction(s) and at what speed.) A user can thus project where and when objects will be in the future. Similarly, a social radar should enable us to forecast who will cluster with whom in a network, where, and when in what kinds of relationships.

You can read the entire paper here.

Of course, I’m sure it will filter out such information about U.S. citizens, and will only act on it in dire circumstances of national security. That dad-gum Constitution and all…

We can only hope that the rest of the info is only available to the altruistic likes of Finch and Reese…


Test your scientific literacy!

This quiz at the Christian Science Monitor covers a wide range of topics. Be careful: I jumped at a couple of answers without fully RTFQ and got 3 wrong… But I knew them 😉

Downloadable skillz…sci fi dream or ???

(Hat tip to Anne Johnson.) This MedicalXpress.com article describes experiments that indicate the possibility of manipulating neuro-pathways using magnetic resonance techniques to improve specific features of visual performance.

While just scratching the surface, the technique could open an ethical Pandora’s box, as the subjects were not aware of the specific behavior that they were being taught (programmed??) to do. After the “training”, the subjects showed a discernible improvement in performance of the targeted task. It might be able to improve the ability of a sonar tech to differentiate false targets from real ones, help surgeons relate diagnostic imagery to what they see in the patient, etc., etc.

On the upside, the decoded neurofeedback method has tremendous implications for memory, motor skills and rehabilitation. On the downside, it acts on the subject as a kind of hypnotic suggestion the subject is not aware of, with untold potentially nefarious applications…

Allen vs Kurzweil in the battle of the Singularity

This opinion piece by Paul Allen argues the singularity (the point where computing power surpasses human brain power) is not very near, if it is possible at all. He bases his criticism on Kurzweil’s Law of Accelerating Returns, which assumes that computing power development will undergo substantial acceleration before slowing to assume the “S” curve all development eventually exhibits. Some of Kurzweil’s writing seems to question whether computing power development will EVER “S”-curve into decline, since once it is taken over by synthetic intelligence it will act more like a nuclear chain reaction than past technology development. Allen doesn’t buy this. Additionally, he invokes “the Complexity Brake”, which questions whether a complex adaptive system like the human brain can be “understood” in the usual sense of the word.

Kurzweil responds here. He starts off, unfortunately, ad hominem, criticizing Allen rather than his arguments, and makes a sort of appeal to his own authority by assuming Allen has not sufficiently studied his work. He simply repeats his arguments after claiming Allen is unaware of them, rather than dealing with them. Kurzweil does have some good arguments, but ultimately we are left with the argument that every exponential growth scenario eventually “S-curves out”, by Allen, and “except this one”, by Kurzweil. Both claim empirical evidence on their side. Kurzweil is correct that so far the “large S-curve” of his Law of Accelerating Returns is composed of finer-scale S-curves that are working over shorter and shorter timescales. Allen is correct that this type of behavior is not unprecedented, and that ultimately the “macro-level” S-curve flattens out.
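The crux of the disagreement is easy to see in toy form. A logistic “S-curve” is indistinguishable from an exponential early on, and only later flattens toward its ceiling – which is exactly why both sides can point to the same historical data. Here is a minimal sketch (my own illustration, with arbitrary growth rate and ceiling, not figures from either essay):

```python
import math

def exponential(t, x0=1.0, r=0.7):
    """Pure exponential growth: Kurzweil's picture, no ceiling."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.7, K=1000.0):
    """Logistic 'S-curve': same early doubling, but it saturates
    at a carrying capacity K -- Allen's picture."""
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

# Early on the two curves are nearly identical; by t=20 the
# exponential has run away while the logistic has flattened at K.
for t in (0, 2, 10, 20):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

The point of the sketch: data from the rising part of the curve cannot, by itself, distinguish the two models – which is why the debate turns on outside arguments like the Complexity Brake rather than on extrapolation.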

Kurzweil gets on thin ice when he criticizes the “Complexity Brake” by stating:

Allen’s statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) do not appreciably add to the amount of information in the genome.

This declaration against the emergence of information content in a complex adaptive system is puzzling coming from someone who relies on this very thing happening for the singularity to occur. Self-replicating machines that increase in complexity require that the “design” of this increasingly complex system of machine intelligence arise from a lesser amount of initial information. Since we are only getting at the tip of the epigenetic-information iceberg, the claim that these interactions do not add to the information in the genome is inexplicable.
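The size of the gap Kurzweil’s argument has to explain away is worth making concrete. A back-of-the-envelope calculation (my own arithmetic, using the standard rough estimate of ~3.2 billion base pairs at 2 bits per base) puts the genome’s raw information content around 0.8 GB, against the “hundreds of trillions of bytes” Allen’s unique-circuits picture implies:

```python
# Rough genome information bound vs. Allen's implied brain "design" size.
# These are coarse public estimates, used only to show the scale of the gap.
base_pairs = 3_200_000_000      # approximate human genome length
bits_per_base = 2               # A/C/G/T encodes 2 bits per base

genome_bytes = base_pairs * bits_per_base // 8
print(f"Genome upper bound: ~{genome_bytes / 1e9:.1f} GB")   # ~0.8 GB

brain_bytes = 100_000_000_000_000  # "hundreds of trillions" lower end
print(f"Gap to close: ~{brain_bytes / genome_bytes:,.0f}x")  # ~125,000x
```

That five-orders-of-magnitude gap is precisely what the emergence debate is about: Kurzweil says the extra structure arises during development, Allen says that developmental process is itself the complexity you have to understand.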

The thing that neither deals with, and what I think may be the uncrossable divide, is that the machine paradigm is digital while the neurons of the brain have electro-chemical analogue characteristics. I am attracted to (but treat as pure speculation) the notion that in addition to an analogue component there could also be a quantum mechanical component. The philosophical argument over the origin of free will and the seat of consciousness gets into some heady stuff there, but the notion that biological systems can play games with quantum superposition of information adds a level to human consciousness that would require a fundamentally different technology than digital circuitry to deal with on a level deeper than mathematical calculation.

For more on that, see Stuart Kauffman on the topic.