Hamiltonian at Oxford
Thursday, April 20, 2017
A solution to Mary's Room (Knowledge argument)
Mary's Room is a philosophical thought experiment that goes as follows: Mary exists in a monochromatic room, but she studies neurobiology and the other sciences until she knows absolutely everything about the physical/material process of colour perception in human beings, and about the wavelengths emitted by everything she will ever come across. She then leaves the room and sees colour outside. Does she gain anything from this?
To most people it feels as though she does, but that is just intuition, which is notoriously unreliable. In order to answer the question we have to make an assumption about the nature of the human mind. If a human mind is some sort of magic entity separate from material reality, then she likely does gain something from seeing colour, because she could not know how her own magic-mind-stuff will react to colours purely from physical/material information. However, if a human mind is entirely an emergent property of the physical/chemical/biological stuff that makes up a human brain, then she gains nothing from being able to see colour.
My argument:
If Mary knows absolutely everything about the physical process of colour perception in a human being, then she can use this information to imagine a human being seeing colour. Not only can she imagine this, but her mental simulation can be completely accurate, containing every detail of a human being seeing colour, including all the activity of the human's brain. For argument's sake, let's make her simulated human, "sim-Mary", an exact duplicate of herself, only outside and seeing the colours she knows every detail about.
Now what can we say about sim-Mary? Sim-Mary's brain must produce an identical copy of Mary's mind, because we've assumed that minds are purely a product of the physical stuff that makes up a brain, and Mary knows everything about that physical stuff, so she can imagine an exact copy of all of it and thus an identical mind (otherwise there would be a contradiction). Thus sim-Mary and sim-Mary's mind must react in precisely the same way as Mary and Mary's mind to seeing colours outside (otherwise Mary doesn't know something about how colours are perceived, hence a contradiction). Therefore Mary knows exactly how she will react to and experience colour even before she steps outside the monochromatic room.
Hence Mary gains absolutely nothing from leaving the room because she has effectively already done so completely within her own mind.
The flaw:
There is one major flaw in my argument, which reveals that the very question is flawed. If Mary can imagine every detail of her own mind and perception within her own mind, then this creates an infinite recursion, because sim-Mary could imagine a sim-sim-Mary imagining a sim-sim-sim-Mary imagining a sim-sim-sim-sim-Mary, and so on. As a result, Mary's mind and imagination must be infinite in its memory and computational capacity. A human brain is not infinite in its capacity, so it cannot learn all the information about itself. So unless the perception of colour requires only a tiny fraction of one's brain/mind, the premise of Mary's Room is impossible; Mary simply cannot know everything about how colour perception occurs in human beings.
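The capacity argument can be made concrete with a toy model. Assume that a simulator must store at least the full state of whatever it simulates, plus some fixed overhead of its own (the constants below are illustrative assumptions, not measurements); then the memory needed grows without bound as the nesting deepens:

```python
def capacity_needed(depth, base_capacity=1.0, overhead=1.0):
    """Minimum memory required to run `depth` nested full-detail
    brain simulations. Toy model: each level must hold the full
    state of the level below it plus a fixed overhead of its own."""
    if depth == 0:
        return base_capacity
    # A simulator stores its own bookkeeping plus the simulated brain.
    return overhead + capacity_needed(depth - 1, base_capacity, overhead)

# Each extra level of sim-Mary adds a fixed cost, so the total grows
# linearly (at best) with nesting depth -- a finite brain cannot
# contain an unbounded tower of full-detail copies of itself.
print(capacity_needed(10))   # 11.0
print(capacity_needed(100))  # 101.0
```

This is only a lower bound under the stated assumption; the point is simply that no finite `base_capacity` suffices once the recursion is unbounded.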
Saturday, April 1, 2017
We are not on the brink of uploading ourselves into computers.
Science fiction and "transhumanists" love the idea that humanity is on the brink of forsaking our frail human bodies and uploading ourselves to the great digital universe. However, this is absolute nonsense spouted by tech-heads who don't have the slightest clue how mind-bogglingly complex a human brain is, nor how close our current electronic technology is to the physical limits it is about to hit (or in some cases already has). Nobody can break the laws of physics, no matter how much we might want to.
Approximately one year ago, technology manufacturers acknowledged the end of Moore's law (the trend of transistor counts roughly doubling every two years). Indeed, for several years before that announcement, all the manufacturing giants had to collaborate just to keep up with Moore's law. And current microprocessor technology has already hit a physical limitation due to heat production. You may have noticed that computers and laptops suddenly all started shipping with dual-core processors; this was the workaround manufacturers came up with to keep increasing performance once heat production made higher clock speeds impractical.
With this in mind, consider the power of the current top of the line chips. The largest commercially available single-chip processor packs in a whopping 7.2 billion transistors. But how does that compare to the human brain?
The human brain contains approximately 100 billion neurons (10^11), more than ten times the number of transistors in these approaching-the-laws-of-physics chips. But a single neuron is far more complex than a single transistor. A transistor has only two states, on and off, and very limited inputs and outputs, whereas a neuron can send and receive hundreds of different chemical signals across connections with thousands of other neurons. So let's consider instead that a single transistor could serve as a single synapse (an interface where two neurons communicate with each other). Well, it's been estimated that there are approximately 1 quadrillion synapses (10^15) in the human brain. So we're talking more than 100,000 top-of-the-line processors to model a single human brain.
But again, a single synapse is much more complex than a single transistor, as it consists of hundreds of different proteins, with thousands of copies of each, not to mention all the proteins involved in transmitting signals between synapses. As a rough approximation, the human brain weighs ~1,500 g, of which 75% is water, leaving ~375 g of non-water material, which we'll assume is at least 50% protein (~188 g). A single protein molecule weighs approximately 10^-19 g, so there are approximately 10^21 protein molecules in a typical human brain.
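The back-of-envelope numbers above can be checked in a few lines. The inputs are the rough figures quoted in the text (order-of-magnitude estimates only, not precise measurements):

```python
# Rough figures from the text -- order-of-magnitude estimates only.
transistors_per_chip = 7.2e9    # largest single-chip processor
synapses = 1e15                 # ~1 quadrillion synapses in a brain

# Chips needed if one transistor could stand in for one synapse
chips_for_one_brain = synapses / transistors_per_chip
print(f"chips per brain: {chips_for_one_brain:,.0f}")   # 138,889

# Protein count: ~1,500 g brain, 75% water, assume half the
# remainder is protein, ~1e-19 g per protein molecule
brain_g = 1500
dry_g = brain_g * (1 - 0.75)            # ~375 g non-water
protein_g = dry_g * 0.5                 # ~188 g protein
protein_molecules = protein_g / 1e-19
print(f"protein molecules: {protein_molecules:.1e}")    # 1.9e+21
```

So the ">100,000 processors" and "~10^21 proteins" claims both follow directly from the stated inputs.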
Whether it is necessary to model each of those molecules in order to recapitulate human consciousness is currently not known. Many would argue that modelling only the network of neurons is enough to recreate human consciousness, while others argue that the quantum states of particular proteins hold the key to consciousness. But regardless of whether it is 10^15 synapses or 10^21 proteins or a combination of both, it will take an entire server farm to even imprecisely model a single human brain for the foreseeable future.
Using Randall Munroe's estimates of Google's computing power and the extremely oversimplified one-transistor-per-synapse model, all of Google could model around 2,000 human brains. But that's not counting back-ups: in the same article it was estimated that Google has a drive fail every few minutes, so each of our modelled human brains would suffer a failure every two days, while consuming hundreds of kilowatts of electricity. Suddenly, a blob of goo the size of a cabbage, which lasts for 60+ years using only ~2,000 kJ of energy per day (about 23 watts), is looking pretty good.
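The energy comparison is easy to verify: converting the brain's daily chemical-energy budget (the 2,000 kJ/day figure above) to watts, and comparing it with a per-brain server power of 200 kW (an illustrative midpoint of the "hundreds of kilowatts" estimate, my assumption):

```python
# Brain: ~2,000 kJ of chemical energy per day, converted to watts
brain_watts = 2_000e3 / (24 * 3600)       # joules per second
print(f"brain: {brain_watts:.0f} W")      # brain: 23 W

# Server model of a brain: "hundreds of kilowatts" per the text's
# estimate; 200 kW is an illustrative midpoint, not a measured value.
server_watts = 200e3
print(f"server model is ~{server_watts / brain_watts:,.0f}x hungrier")
```

Roughly four orders of magnitude separate the two, before even counting cooling and redundancy.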
Sunday, December 6, 2015
WTF Happened in the Doctor Who finale? (Spoilers!)
This season's finale may have done what I thought was impossible: be worse than last season's finale (why the fuck are the UK newspapers praising it?). This was really a three-part story where the ending totally destroys all the stakes built up across the story.
But let's start at the beginning with "Face the Raven". Maisie Williams is running a refugee shelter for aliens in the middle of London. She fakes the murder of one of her residents to justify sentencing a young guy to death (using a count-down tattoo), because she somehow knows he was vaguely involved in another Doctor Who escapade and has the Doctor's phone number. But for some reason she also wipes the guy's memory, so he, Clara and the Doctor can spend time trying to figure out what is going on. Part way through investigating the crime to prove the guy is innocent (so Maisie will remove the death count-down), Clara decides to transfer the death sentence to herself because she thinks this will "give them more time", even though the count-down timer doesn't change at all, so really this accomplishes nothing. OK, Clara is pretty stupid, so whatever. Eventually they figure it all out, and Maisie reveals it was all just a trap for the Doctor, later revealed to be somehow orchestrated by the Gallifreyan government to extract information from him. Maisie agrees to remove the death count-down but can't, because it has been transferred to Clara, because reasons... So we get a long scene of Clara convincing the Doctor not to take revenge for her death, because it's completely her own fault she is about to die (but he kind of does anyway). Eventually Clara dies pointlessly out of her own stupidity, and everyone who hates her for being a rubbish character can celebrate.
The second episode begins with the Doctor re-materializing after being teleported by Maisie into what is eventually revealed to be his own confession dial. The Doctor runs away from a scary monster and reveals a prophecy about something called a Hybrid, child of two warrior races, that will stand in Gallifrey's ruins; by refusing to say more (it's later revealed he probably doesn't actually know any more), he gets himself stuck in a loop, slowly digging through a wall by punching it a few times each cycle to escape (despite there being a metal shovel, and numerous skulls, which would probably be more effective). It is revealed the loop cycles several billion times, but since the Doctor is reset after each cycle, he only subjectively experiences the loop for a few days to a few months, and can only infer from the stars (which are part of a simulated environment but for some reason move in real time?) that he has been stuck for billions of years. Oh, and the Doctor misses Clara. It ends with the Doctor escaping the simulated reality and emerging in the middle of nowhere on Gallifrey.
The third episode is told as flashbacks while the Doctor tells the story to a character played by Jenna Coleman, who is later revealed to actually be Clara, even though at the end of the story he's supposed to have had all his memories of his adventures with Clara erased (hence why he doesn't recognize that he is talking to her), so he shouldn't be able to remember this story at all... Anyway, the story starts with him pissed at the leaders of Gallifrey for imprisoning him in his confession dial (though later he is totally fine with Maisie Williams' part in that imprisonment). But he's in the middle of nowhere, so he goes home to what I think was his childhood house/barn (seen briefly in another episode), or maybe it's supposed to be the barn where he was going to set off the big bomb to destroy Gallifrey and the Daleks, or maybe it's just supposed to be a random barn. However, even though it was the Gallifreyan government's plan to use the confession dial to learn about the Hybrid, they don't seem to have kept track of the dial's location, and are surprised when the Cloisters go off (with the same noise used when the TARDIS exploded and destroyed the universe at the end of season 5) to signal that the Doctor has returned. The President of Gallifrey and some soldiers go to arrest/execute the Doctor, but the soldiers consider the Doctor a war hero, even though he nearly destroyed all of Gallifrey, then locked it in an alternate universe twice, though it's really been hiding at the end of the universe. Anyway, the Doctor uses the loyalty of the soldiers to lead a military coup against the Gallifreyan government (which presumably was elected at some point) and exiles the President... somewhere. He then uses Gallifreyan technology to rescue Clara moments before her death, under the guise of getting information about the Hybrid from her. Next, he murders the General who helped him with the coup (after confirming he still had regenerations left) as a distraction while he and Clara go hide in the Cloisters.
The General was preventing them from doing so because this will cause cracks in the universe and lead to the universe being destroyed (a la season 5). In the Cloisters (a stone computer full of dead Time Lords and guarded by dead evil creatures), the Doctor reveals he's been there before, and that this is where he learnt about the Hybrid, which led him to steal a TARDIS and run away because he was scared. They repeat this adventure, but not before the regenerated General tells Clara she has to go back to the moment she died or it will destroy the universe.
The Doctor and Clara run to the very end of the universe to... something about creating a new timeline, or breaking Clara's connection to the moment she died, or whatever... to "save Clara". This "saving Clara" doesn't work, but they meet Maisie Williams, who, despite repeatedly forgetting who Clara was in past appearances and being less immortal than Jack Harkness, is somehow the last living being in the universe (and still hasn't grown up), knows everything about the Doctor and Clara, and has been watching the stars burn out from underground in the Cloisters. She and the Doctor exchange accusations that the other is the Hybrid, then she accuses the Doctor and Clara of together being the Hybrid, because he is willing to destroy the universe to save Clara. The Doctor reveals he's going to Donna-Noble Clara (i.e. wipe her memories of him) and return her to Earth because... he's gone too far and is risking destroying the universe to save her (which is all to do with his feelings for her, not her feelings for him), or maybe it's to protect her from his enemies, who would try to use her knowledge of him (as he sort of says later). But wiping her memory won't stop the universe from being destroyed, because she hasn't died when she was supposed to.
Clara eavesdrops on this part of the conversation and maybe somehow modifies the device he was going to use for the mind-wipe to do the reverse (wipe the Doctor's memories of Clara). They agree to use the device with unknown consequences, for some reason. It ends up wiping the Doctor's memories, and Maisie and Clara drop him off in the middle of the Nevada desert, close-ish to where he dies in "The Impossible Astronaut", hence bringing us back to the cafe (the same one as in "The Impossible Astronaut" story line). But the Doctor still remembers all the adventures he had with Clara, and he is now looking for her and his TARDIS (but somehow knows it has been moved from London, despite still being in Nevada). Finally, Clara leaves and reveals that the whole cafe was part of the camouflage of the second TARDIS the Doctor stole, which is now crewed by Clara (now also sort of immortal) and Maisie. The two of them decide to fly off and have adventures, leaving the Doctor with his old TARDIS to fly off looking for Clara. But really it doesn't matter, because the whole universe should be ripping itself apart, because Clara still hasn't died when she was supposed to.
So overall: Clara dies because she's stupid. She emotionally implores the Doctor not to take revenge, but he does so anyway by staging a coup d'etat and exiling the Gallifreyan President. The Doctor saves Clara despite knowing that saving her will cause the universe to be destroyed. The Doctor wipes his own memory of her because he is causing the whole universe to be destroyed to save her. But the memory wipe is pointless, because he still remembers enough to be trying to find her. And Maisie and Clara run off together in a TARDIS, which makes Clara and the Doctor separating pointless, because the universe is still going to be destroyed, because Clara still isn't dying when she is supposed to. So basically all of them are terrible people, and all the sad emotional parts are completely pointless...
Saturday, December 5, 2015
Male and Female Brains
Hopefully all of you have heard about the recent PNAS article "debunking" the existence of male and female brains. I use the quotation marks here because there was never any substantial scientific evidence that male and female brains existed. Rather, the concept of fe/male brains has arisen from scientists trying to make their research seem more important than it really is, and from journalists trying to simplify complex science into an easily digestible sound bite.
Origin of the fe/male brain myth
When scientists present their results in a scientific paper, they are trying to convince the reader of the truth of a specific finding. That means they present their data in the most convincing manner rather than the most truthful manner. So when scientists are trying to prove that male and female brains are different, they almost exclusively present averages over a large number of individuals rather than the whole distribution, because this looks "cleaner" and makes the finding they are presenting obvious to the reader (and hence easier to publish). However, this can also cause unsavvy readers to think the differences are more important than they really are.
For instance, this paper finds men have significantly bigger brains than women, by approximately 90 cm^3 (a ~8% difference), which sounds like a lot. But this is just a difference in population means; if we look at the actual distributions of brain size for women and men in that paper (see below), it's obvious there is a lot of overlap, so inferring the existence of a "male brain" and a "female brain" from this data makes as much sense as claiming there exists a "male height" and a "female height".
Now that I've mentioned height, let's look at another paper, which finds a significant difference in brain size between genders across all ages. They have a very convincing-looking figure (see below left) to support this claim. But what is not obvious from the figure is that these are, once again, averages, representing a total of 2,773 males and 1,963 females. This paper helpfully did the same analysis on the heights of these individuals (below right), which makes it apparent that the gender differences in brain size are no bigger (and may actually be smaller) than the gender differences in height.
Every single study I've come across finding differences in brain size/morphology or behaviour between men and women follows the same story. Using a study population of more than 1,000, they find highly significant differences in the mean of characteristic X, on the order of 3-10% (p < 0.05), which in the discussion they then link to current sexist stereotypes (i.e. that women are better at multitasking and emotion, and men are better at logic, math, and science). But if you work out the population distributions, men and women overlap for >80% of the range of values observed, meaning it would be impossible to predict the gender of a person on the basis of the brain/behaviour measurement.
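A quick calculation shows how little predictive power a difference in means of this size buys. Assuming brain volumes are roughly normal with group means ~90 cm^3 apart and a within-group standard deviation of ~100 cm^3 (the SD is my illustrative assumption; the paper's exact spread may differ), the best single-threshold guess of gender from brain size is only modestly better than a coin flip:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

mean_diff = 90.0   # cm^3, difference in group means (from the cited paper)
sd = 100.0         # cm^3, within-group SD -- illustrative assumption

# With a threshold midway between the two means, the probability of
# a correct guess is Phi(d/2), where d is the standardized difference.
d = mean_diff / sd
accuracy = phi(d / 2)
print(f"best-guess accuracy: {accuracy:.0%}")   # 67%, vs 50% chance
```

So even a "highly significant" mean difference leaves the two distributions overlapping so much that guessing gender from the measurement barely beats chance.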
The New Study
Okay, so now that we've established that scientists have known all along that you can't predict a person's gender from any individual brain/behaviour characteristic, we can get to the current study. This new study addresses the possibility that, while each individual characteristic might not be a good predictor of gender, perhaps they all correlate together, so that "male brains" and "female brains" are cumulatively quite different from each other even if each individual difference is quite small and meaningless. This is similar to the difference between wolves and coyotes: there isn't a single distinguishing characteristic between the two; rather, several characteristics are combined to tell them apart. If this is the case, then brains should be fairly consistent, possessing all the male-ish characteristics and no female-ish characteristics, or all the female-ish and no male-ish characteristics.
This new study finds little consistency within brains, with the majority having various combinations of male-ish and female-ish traits, and only a tiny fraction (<10%) showing consistent male-ness or female-ness. Strikingly, this was even true for the single behavioural study which finds gender to be a meaningful way to group individuals (as long as they are US undergraduate psychology students). That analysis shows that knowing someone's gender will typically let you correctly guess their interest (or lack of interest) in ~5 of the 10 most sexually-dimorphic activities.
Altogether, this shows that gender is probably not a key axis of variation for either brains or behaviours, and that the concept of a "male brain" and a "female brain" is meaningless. Hopefully this will mean these fields can move beyond gender to look at other potentially important axes of variation (e.g. culture, genetics, geography).
Sunday, November 29, 2015
The Meaning of Science
A few months ago I finished reading The Meaning of Science by Tim Lewens, and I haven't quite worked out what I would write about it. Largely this is because it contains two very distinct sections. The first describes and discusses what science is and what is and is not science. The second discusses what science has to say on several "Big Questions": altruism, nature vs nurture, and free will.
What is science
The first parts of the book are truly excellent, by far the best philosophy of science I've read. The discussion of the popularity and limitations of the Popperian concept of falsifiability in science is nearly perfect. His discussion of Kuhn's paradigms is arguably better than Kuhn's own book. By that I mean his re-interpretation of Kuhn's ideas (which, having read The Structure of Scientific Revolutions myself, I'm unconvinced is actually what Kuhn believed) is more relevant to actual science and scientists.
The one major flaw of this section comes near its end, when Lewens deals with the veracity of scientific theories (while discussing the No Miracles argument). Like all other philosophers of science I have read (admittedly rather few), he misses the importance of predictions, or more generally evidence-after-the-fact: the scientific evidence supporting a particular theory that is gathered/observed after the theory has been postulated rather than before. The key here is that scientific theories do not appear at random from the empty space between a scientist's ears; rather, theories are inspired by existing scientific observations. It is inevitable that any proposed scientific theory will be supported by a large amount of existing scientific observations, thus there is a degree of circular reasoning in using existing observations to "prove" a theory is true. This may be less evident to outsiders because of the unspoken (but ubiquitous) scientific tradition of re-arranging the actual investigative process when preparing a paper for publication.
The format of scientific papers is to begin with the background information which inspired the current study; in reality this section is usually written last, with the benefit of 20:20 hindsight. Many scientific projects actually begin with a relatively vague question/topic, and early results are used as inspiration for hypotheses to test further. These further results are used to refine the hypothesis even more, which in turn guides the gathering of more results. Eventually this process converges on a relatively specific hypothesis/question which the accumulated results support/answer.
However, evidence-after-the-fact is free from this circularity, making it much stronger evidence of the veracity of any scientific claim. This is why reproducibility (evidence-after-the-fact for one specific claim) is such an important measure of the quality of scientific work, and hence why the "crisis of reproducibility" is such a big deal. It is also why the No Miracles argument is so popular among scientists, as demonstrated by Bill Nye's frequent use of it when debating evolution: no fossils have ever been found where dinosaurs appear in the wrong order, predictions of what intermediate forms would look like and where they would be found have proven correct, etc.
Science and Big Questions
The main weakness of Lewens' book comes in the second section, where he tries to discuss what science means for answering big questions. The more scientifically well-defined of the big questions get quite good discussions (e.g. altruism), but the less well-defined questions, such as free will, really fall apart.
Lewens' discussion of altruism is quite good, covering both evolutionary theory (Dawkins' selfish genes) and experimental psychology/economics results which hint at psychological/cultural factors playing a role; for instance, that studying economics is correlated with selfish behaviour.
The second big question is human nature. Lewens' main thesis in this section is that human nature is a superstition/myth which has no scientific meaning and should be discarded. This is technically true: human nature is not a concept used by modern science. But the reason may be the political hijacking of the term (science doesn't like it when the media/politicians bastardize our terminology) rather than the actual abandonment of the concept. In modern science, human nature has been replaced by the concept of "normal". The main difference between these two terms is that 'human nature' is a singular discrete entity, whereas 'normal' is a broad distribution describing a population. Lewens completely ignores the existence of 'normal' in science, which invalidates many of his arguments. For instance, he argues that species are defined as a group of organisms descended from a common ancestor and able to mate with one another, rather than by their natures, because the nature of a species is a meaningless idea. But that fails to acknowledge cases such as coyotes and wolves, which frequently interbreed successfully yet are considered separate species due to physical and behavioural differences which result in them breeding with their own kind most of the time.
Lastly, Lewens attempts to discuss free will, which I consider a complete failure. Lewens faces the same problem many rationalist non-scientists have: he wants a way to define free will without invoking religious ideas of the soul or any sort of ghost in the machine, but he still wants it to be relatively universal among humans yet exclusive of almost all other organisms. The problem is, of course, that without a ghost in the machine the human brain is a slave to material causality and random Brownian motion. As a result, the definition of free will is limited to the degree of sensitivity to its environment an organism exhibits. However, plants are exceedingly sensitive to their environments: they adjust their growth in response to thousands of chemicals, light, gravity, the presence of related plants, and potentially other signals we have yet to discover. Yet I don't think anyone has suggested plants have free will just yet.
Free will usually implies some sort of centralized decision-making/cognition. Lewens uses the example of himself 'rationally' weighing the facts before buying a car, which is obviously a decision no other organism faces, effectively making it seem like this cognitive processing is unique to humans. But if we recast this as a decision more compatible with non-technological animals, such as deciding what to have for dinner, it becomes far less clear that humans are unique. People can go out to a restaurant for dinner, or they can go home and prepare dinner from food in their refrigerator. Several factors are considered when making this decision: is there any food in the fridge which will go off soon? Is the restaurant likely to be open and have free tables? What will the traffic/parking be like near the restaurant? Is there enough food in the fridge for the rest of the week? Etc. Many animals, from chipmunks and squirrels to jays and shrikes, cache food for later and have to make similar decisions: do they go out foraging, or do they eat food from their cache? And they must weigh similar issues: foraging risks exposure to predators and poor weather, and the likelihood of finding different foods depends on the time of year, but caches are limited and could spoil or be robbed. Rather than specific examples of decisions, we could generalize the idea and say that free will requires an organism to formulate a plan and then carry it out. But anyone who watched the BBC series "The Hunt" will have to acknowledge that many predators, from a lowly spider to the mighty polar bear, seem to make plans about what they are going to hunt and how they are going to catch it.
He digs himself further into a hole by insisting that only rarefied experts should have any say on what free will is, despite free will, like consciousness, being a completely subjective experience (an argument he uses to discredit all studies which attempt to determine what 'most people' consider freedom to be). This is also why free will is so rarely discussed/examined by scientists: if there is no objective way to measure/observe something, science has almost nothing definitive to say about it. However, most scientists have quietly accepted that humans are slaves to the mechanistic and stochastic universe, which means that in the grand scheme of things free will is an illusion produced as a by-product of our decision-making cognitive systems.
What is science
The first parts of the book are truly excellent, by far the best philosophy of science book I've read. The discussion on the popularity & limitations of the Popperian concept of falsifiability in science is nearly perfect. His discussion of Kuhn's paradigms is arguably better than Kuhn's own book. By that I mean his re-interpretation of Kuhn's ideas, which I'm unconvinced is actually what Kuhn believed having read The Structure of Scientific Revolutions myself, is more relevant to actual science & scientists.
The one major flaw of this section comes near the end of this section when Lewens deal with the veracity of scientific theories (while discussing the No Miracles argument). He like all other philosophers of science I have read (which is admittedly rather few) misses the importance of predictions or more generally evidence-after-the-fact. By which I mean the scientific evidence which supports a particular scientific theory that is gathered/observed after the theory has been postulated rather than before. The key here is that scientific theories do not appear at random from the empty space between a scientists ears, rather theories are inspired by existing scientific observations. It is inevitable that any proposed scientific theory will be supported by a large amount of existing scientific observations, thus there is a degree of circular reasoning to using existing scientific observations to "prove" a scientific theory is true. This may be less evident to outsiders because of the unspoken (but ubiquitous) scientific tradition of re-arranging the actual investigative process when preparing a paper for publication.
The format of scientific papers is to begin with the background information that inspired the current study; in reality this section is usually written last, with the benefit of 20:20 hindsight. Rather, many scientific projects begin with a relatively vague question/topic, and early results are used as inspiration for hypotheses to test further. These further results are used to refine the hypothesis even more, which is used to gather more results. Eventually this process converges on a relatively specific hypothesis/question which the accumulated results support/answer.
However, evidence-after-the-fact is free from this circularity, making it much stronger evidence of the veracity of any scientific claim. This is why reproducibility (evidence-after-the-fact for one specific claim) is such an important measure of the quality of scientific work, and hence why the "crisis of reproducibility" is such a big deal. It is also why the No Miracles argument is so popular among scientists, as demonstrated by Bill Nye's frequent use of it when debating evolution: no fossils have ever been found where dinosaurs appear in the wrong order, predictions of what intermediate forms would look like and where they would be found were correct, etc...
Science and Big Questions
The main weakness of Lewens' book comes in the second section, where he tries to discuss what science means in terms of answering big questions. The more scientifically well-defined of the big questions receive quite good discussions (eg. altruism), but the treatment of less well-defined questions such as free will really falls apart.
Lewens' discussion of altruism is quite good, covering both evolutionary theory (Dawkins' selfish genes) and experimental psychology/economics results which give hints of psychological/cultural factors playing a role; for instance, that studying economics is correlated with selfish behaviour.
The second big question is human nature. Lewens' main thesis in this section is that human nature is a superstition/myth which has no scientific meaning and should be discarded. This is technically true, human nature is not a concept used by modern science, but the reason may be the political hijacking of the term (science doesn't like it when the media/politicians bastardize our terminology) rather than an actual abandonment of the concept. In modern science, human nature has been replaced by the concept of "normal". The main difference between these two terms is that 'human nature' is a singular discrete entity, whereas 'normal' is a broad distribution describing a population. Lewens completely ignores the existence of 'normal' in science, which invalidates many of his arguments. For instance, he argues that species are defined as a group of organisms descended from a common ancestor, together with whatever they can mate with, rather than based on their natures, because the nature of a species is a meaningless idea. But that fails to acknowledge cases such as coyotes & wolves, which frequently interbreed successfully but are considered separate species due to physical & behavioural differences which result in them breeding with their own kind most of the time.
Lastly, Lewens attempts to discuss free will, which I consider a complete failure. Lewens faces the same problem many rationalist non-scientists have: he wants a way to define free will without invoking religious ideas of the soul or any sort of ghost in the machine, yet he still wants it to be relatively universal among humans but exclusive of almost all other organisms. The problem, of course, is that without a ghost in the machine the human brain is a slave to material causality and random Brownian motion. As a result, the definition of free will is limited to the degree of sensitivity an organism exhibits to its environment. However, plants are exceedingly sensitive to their environments: they adjust their growth in response to thousands of chemicals, light, gravity, the presence of related plants, and potentially other signals we have yet to discover. Yet I don't think anyone has suggested plants have free will just yet.
Free will usually implies some sort of centralized decision-making/cognition. Lewens uses the example of himself 'rationally' weighing the facts before buying a car, which is obviously a decision no other organism is faced with, effectively making it seem like this cognitive processing is unique to humans. But if we recast this as a decision more compatible with non-technological animals, such as deciding what to have for dinner, it becomes far less clear that humans are unique. People can go out to a restaurant to have dinner or they can go home and prepare dinner from food in their refrigerator. Several factors are considered when making this decision: is there any food in the fridge which will go off soon? Is the restaurant likely to be open & have free tables? What is the traffic/parking going to be like near the restaurant? Is there enough food in the fridge for the rest of the week? etc... Many animals, from chipmunks & squirrels to jays and shrikes, cache food for later and have to make similar decisions: do they go out foraging or do they eat food from their cache? And they must weigh similar issues: foraging risks exposure to predators and poor weather, and the likelihood of finding different foods depends on the time of year, but caches are limited and could spoil or be robbed. Rather than specific examples of decisions, we could generalize the idea and say that free will requires an organism to formulate a plan and then carry it out. But anyone who has watched the BBC series "The Hunt" will have to acknowledge that many predators, from a lowly spider to the mighty polar bear, seem to make plans about what they are going to hunt and how they are going to catch it.
He digs himself further into a hole by insisting that only rarefied experts should have any say on what free will is, despite free will, like consciousness, being a completely subjective experience (which he uses to discredit all studies which attempt to determine what 'most people' consider as freedom). This subjectivity is why free will is so rarely discussed/examined by scientists: if there is no objective way to measure/observe something, science has almost nothing definitive to say about it. However, most scientists have quietly accepted that humans are slaves to the mechanistic & stochastic universe, which means that in the grand scheme of things free will is an illusion produced as a by-product of our decision-making cognitive systems.
Friday, November 13, 2015
Anti-Outrage Outrage
This week Nature published an article discussing the relationship of social media with sexism in science. But the most interesting part was actually an aside looking at the social media controversy about the ESA guy wearing a shirt covered in comic-book-styled women. The key things to note are that the actual "storm" kicked off after he publicly apologized, and that only ~20% of people using the #shirtstorm and #shirtgate hash-tags were women! In contrast, the #scishirt hashtag, which was a positive response to the controversy, was 67% women. All of which suggests at least 50% (probably more, given the long delay compared to the popularity of the broadcast) of the outrage was actually anti-outrage outrage, ie people getting outraged at the fact that some women were offended by the shirt.
Similarly, there have been many recent articles complaining that trigger warnings are destroying university education, preventing people with psychological issues from getting better, or turning students into coddled arseholes. This despite the fact that all the examples these articles cite of students asking for trigger warnings involve individual students (and in some cases dozens of students opposed the complaint). That's a handful of students, across the roughly 20 million currently attending higher education in the US, who are making unreasonable requests for trigger warnings. Yet there are dozens of people amongst my extended social network who are outraged about this "outrage".
Most recent is the Starbucks plain red Christmas cup, where a single video of someone complaining about it has sparked a massive anti-outrage outrage which has even extended to a notable late-night talkshow host, despite the fact that almost nobody is actually upset about it. Yet again the anti-outrage outrage is far outstripping the actual outrage.
So in that light I hope this blog can start yet another twitter storm/social media anti-anti-outrage outrage, so that the cycle can continue until everyone is completely incensed at all times about nothing at all.
Sunday, April 5, 2015
Why Mining Asteroids is the worst idea since sliced bread.
Asteroid mining is economically unfeasible and likely to remain so for at least another 50 years (possibly forever, as reclaiming rare metals from road-dust and recycling electronics is far more practical). There are three main costs to any kind of mining:
(1) exploration/resource discovery, and ore vein mapping: This will require identifying a target and landing something on the asteroid/comet, a project very similar to that undertaken by the Rosetta (Philae) mission, at a cost of $1.8 billion USD.
(2) machinery & fuel for extraction: This has completely unknown costs, since current technologies all rely on the presence of oxygen, both for burning the fossil fuels that power drills and for setting off explosives. And that's just for getting the ore into transportable chunks. Ore is often also processed to increase the concentration of the mineral of interest before being transported long distances, to cut down on transportation costs (smelting). Current smelting techniques typically require high temperatures and/or liquid water, neither of which is readily available on asteroids.
(3) transportation of extracted material: This would require something like a shuttle to protect the minerals from burning up in the atmosphere on the way down. A close analogy is NASA's Shuttle program, which cost ~$450 million per trip; at the current price of gold, that works out to roughly 12,000 kg of 24-carat gold per trip just to break even. The payload capacity of the Shuttle was only 22,700 kg, which doesn't leave much margin for impurities or extraction/exploration costs.
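The break-even figure in (3) is easy to check yourself. The sketch below redoes the arithmetic with assumed round numbers: ~$450 million per Shuttle-class flight (as above) and a gold price of roughly $38,500/kg (about $1,200 per troy ounce at the time of writing); the exact answer shifts with the gold price you plug in.

```python
# Rough break-even estimate for returning asteroid-mined gold on a
# Shuttle-class vehicle. Both input figures are assumptions, not data.
COST_PER_TRIP_USD = 450e6        # ~cost of one Shuttle flight
GOLD_PRICE_USD_PER_KG = 38_500   # ~$1,200/oz * 32.15 oz/kg
SHUTTLE_PAYLOAD_KG = 22_700      # Shuttle payload capacity to LEO

# Mass of pure gold needed to cover one trip's launch cost alone
break_even_kg = COST_PER_TRIP_USD / GOLD_PRICE_USD_PER_KG

# How much of the payload bay that break-even mass consumes
payload_fraction = break_even_kg / SHUTTLE_PAYLOAD_KG

print(f"break-even mass: {break_even_kg:,.0f} kg")
print(f"fraction of payload capacity: {payload_fraction:.0%}")
```

So even before counting exploration or extraction costs, over half the payload bay has to be pure gold just to pay for the flight down.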
This means it is a vanity project for the many super-rich investing in the various companies 'planning' to engage in asteroid mining. It is a disgusting display of wealth which highlights the vast inequalities of our modern society. Notable 'investors' in asteroid mining are Larry Page and Eric Schmidt of Google, and some former execs from Goldman Sachs: people whose fortunes have been amassed by invading our privacy and selling that information to advertisers, and who probably contributed to crashing the global economy. While historical aristocrats invested their money building vast houses befitting the title 'estates' or on extravagant monuments, the current generation prefers throwing it at science/technology pipe dreams. However, unlike the Great Pyramids of Egypt or Europe's many castles, there will be nothing substantial left behind for people to gawk at, or to be re-purposed into hotels or conference centres, when these companies inevitably fail. Thus, these sci-tech vanity projects are more similar to the roaring parties of the 1920s, perhaps justifying the recently invented term 'nerdgasm', than to the extravagant constructions of the past.
Unfortunately the amount of investment in these projects is not public knowledge, so I can't calculate the number of lives that could be saved if the same amount were invested in malaria-fighting bed-nets ($10 a piece), or vaccines to eliminate polio (<$1 billion/year globally), or providing maternal health care, or paying for anti-retroviral drugs, or building toilets, or providing sources of fresh water, or funding schools, etc... But no, unlike the Gateses, the 'cool' techno-aristocrats prefer to flush their money down a giant space toilet.