

Smartest person you kn(e)w

  • Thread starter: smcder


It's possible to be happy for the wellness of others without having to feel sympathy for their misfortunes ( which is the definition of compassion ), and people can be motivated by what makes them happy, not just by doing something to avoid feeling unhappy. Also, sympathy for the misfortunes of others isn't the only thing that tells us the difference between what is right or wrong. Hypothetically, it should be possible to be completely dispassionate and still do the right things based on needs for survival, health, education, standard of living, etc.

It seems to me that the largest portion of suffering and misfortune could be alleviated with material solutions, and to be clear about that I include things like schools, doctors, counsellors, and such in that category because even though education and counselling in the purest sense aren't "material", we still receive those benefits by way of having the material facilities and people in place. So yes I would be happier having the extra intelligence points required to boost me into a position to provide those things on a wider scale than to feel compassionate while not having the means to do nearly as much about it.

I don't need compassion to know that mass-murder is wrong. I don't think any other reasonably intelligent person does either.

I tend to agree ... and the hypothetical is far-fetched ... but how do we distinguish this far-fetched hypothetical case from a real-life one with many similarities:

Hiroshima / Nagasaki, where persons of great intelligence developed a weapon and encouraged its use for the killing of a great number of non-combatants?

I'm ashamed to say I don't know much about the actual bombing and the decisions around its use. I believe Einstein wrote a letter to the President supporting the use of atomic weapons, and Oppenheimer later concluded it was the right thing to do.

At any rate it appears to me something like your reasoning was used:

"Hypothetically, it should be possible to be completely dispassionate and still do the right things based on needs for survival, health,education, standard of living, etc."

Certainly many of your criteria above apply and arguably bringing a swift end to the war spared lives.

Was this in fact a dispassionate decision?

What role did compassion play?

Do we use the calculus of war to conclude this was not an act of mass murder and therefore not wrong?

Or is mass killing simply wrong under any conditions?
 
Mass killing is wrong under any condition. Any experience of another person, or people, that requires killing them represents a failure of humanity. Because we collect, we inevitably believe that our way is the best way and worth killing for. Because we don't believe in each other and we believe in theories and ideologies instead we rationalize the act of killing millions of others in the name of our way of life, our god, our ideologies, our beliefs. So long as we keep doing this we see a failure in maturing as a thinking and feeling species.

But didn't Pinker say that we're becoming less violent over time? So then perhaps we are maturing, becoming less violent, less suicidal (unless you're in that young adult age bracket), more global in our thinking and collective concerns. Our outlooks are becoming more regional and continental, and we're starting to think a little more like China, with 50-100 year plans and beyond. When we look this far ahead, I wonder if thinking more about the planet's well-being in order to promote the well-being of the human collective will help us to eliminate needless violence, as any person whose life ends by violence is a life lost.
 
The discussion has moved away from what my personal decision would be if I had the choice to trade some compassion for intelligence, so that it would be easier for me to figure out ways to help the less fortunate without compromising my own material situation, to global issues like World Wars and weapons of mass destruction. I don't see how the two situations can be fairly compared. But I would be willing to say that things like survival, health, education, standard of living, etc. all seem to be polar opposite to either nuking or being nuked.

So I suspect that if everyone were given enough intelligence points to make it easier for them to help out the less fortunate, we would find that we could accomplish more by doing away with wars and violence and channeling those resources into making the world a better place. I don't think being smarter and less compassionate necessarily makes someone a psychopath. It seems to me that psychopathy is more about being self-serving at other people's expense and by any means, rather than cooperating as a whole for a common good.


You mentioned psychopaths in an earlier post. For those who have no problem playing videos, here's a TED Talk on how it manifests itself in the workplace (BTW: I experienced this type of thing in more than one job I've had).

 
The discussion has moved away from what my personal decision would be if I had the choice to trade some compassion for intelligence, so that it would be easier for me to figure out ways to help the less fortunate without compromising my own material situation, to global issues like World Wars and weapons of mass destruction.

isn't it amazing how one thing can lead to another ... ? ;-)

Hypotheticals can do that - move from a specific to a general rule ... I still think Hiroshima is an interesting example - very intelligent persons involved ... did Einstein have to write a letter of encouragement? Did any of those involved visit hospitals in Japan after the bombing? As I said Oppenheimer first opposed and then later validated his decision. I have a good discussion of it in Shattuck's Forbidden Knowledge.

Anyway ... we can move the discussion back to your situation if you like. I think it's an interesting answer - but without compassion would you be motivated to help the less fortunate? As compassion decreased and intelligence increased ... might you be motivated to finish the process, maximize intelligence and eliminate compassion? You're already moving in that direction anyway, right?

No, I don't think psychopathy has anything to do with you. I had thought we discussed it earlier, but it may have been another thread. I took the self-report and was very honest about how my inner dialogue went, which is why I think I had such a split - I do have most of the primary traits but fewer of the social ones, and of course I don't act those out ... probably not valid for many reasons, but I did score high on psychopathy on a professionally administered test ... so it's interesting to think about considering the kind of work I got involved in. The psychiatrist at the time said I should be good at either crime or business, but he was wrong.

I was just curious if anyone else took it.
 
Mass killing is wrong under any condition. Any experience of another person, or people, that requires killing them represents a failure of humanity. Because we collect, we inevitably believe that our way is the best way and worth killing for. Because we don't believe in each other and we believe in theories and ideologies instead we rationalize the act of killing millions of others in the name of our way of life, our god, our ideologies, our beliefs. So long as we keep doing this we see a failure in maturing as a thinking and feeling species.

But didn't Pinker say that we're becoming less violent over time? So then perhaps we are maturing, becoming less violent, less suicidal (unless you're in that young adult age bracket), more global in our thinking and collective concerns. Our outlooks are becoming more regional and continental, and we're starting to think a little more like China, with 50-100 year plans and beyond. When we look this far ahead, I wonder if thinking more about the planet's well-being in order to promote the well-being of the human collective will help us to eliminate needless violence, as any person whose life ends by violence is a life lost.

So you would not have bombed Hiroshima/Nagasaki under the rationale of ending the war and saving (on balance) more lives? I don't say that is what would have happened - but that's what the rationale was. I think the letter Einstein wrote is available - will try to find it.

Pinker's argument ... yeah, I'm not so sure - it would be an interesting one to look at - but I don't think it's because we are maturing. The homicide rate in the US has gone down because ER services have improved, for example ... but I think Pinker argues that violent acts of any kind have declined ... will try to find more on that too.
 

I would never kill anyone for a lower case god.

I think middle-aged males may now be the leading demographic for suicides.

Also, I guess I'm not sure we can "mature" as a species ... and even as individuals, we grow and change and adapt ... but we can go backwards as well as forward - maturity even changing according to social standards ... but I'm fairly sure that as a species we retain all of our capacities for good and evil, rather than progressively growing up and becoming less violent ... unless there is genetic engineering or other modifications.

And that is a question I never got answered I think on one of the Transhuman threads ...

If we are going Transhuman, should we engineer into the species maturity, compassion, and other virtues? - i.e., more humanity into humanity ... I've not seen that narrative in a sci-fi novel ... increases in intelligence and physical abilities, but not traits like empathy, etc ... ?

Here's the Pinkerpedia article

The Better Angels of Our Nature - Wikipedia, the free encyclopedia


"In his review of the book in Scientific American,[35] psychologist Robert Epstein criticizes Pinker's use of relative violent death rates — that is, of violent deaths per capita — as an appropriate metric for assessing the emergence of humanity's "better angels"; instead, Epstein believes that the correct metric is the absolute number of deaths at a given time. (Pinker strongly contests this point; throughout his book, he argues that we can understand the impact of a given number of violent deaths only relative to the total population size of the society in which they occur, and that since the population of the planet has increased by orders of magnitude over history, higher absolute numbers of violent death are certain to occur even if the average individual is far less likely to encounter violence directly in their own lives, as he argues is the case.) Epstein also accuses Pinker of an over-reliance on historical data, and argues that he has fallen prey to confirmation bias, leading him to focus on evidence that supports his thesis while ignoring research that does not."

and

"In an extensive review of Jared Diamond's The World Until Yesterday, Anthropologist James C. Scott also mentions and attacks Pinker's The Better Angels of Our Nature. Summarizing the conclusions of both books as follows, "we know, on the basis of certain contemporary hunter-gatherers, that our ancestors were violent and homicidal and that they have only recently (very recently in Pinker’s account) been pacified and civilised by the state. Life without the state is nasty, brutish and short." Scott argues, in attacking Diamond and Pinker alike, that "it does not follow that the state, by curtailing ‘private’ violence, reduces the total amount of violence." Also believing that Pinker and Diamond are representing Hobbesian views on the formation of states, "Hobbes’s fable at least has nominally equal contractants agreeing to establish a sovereign for their mutual safety. That is hard to reconcile with the fact that all ancient states without exception were slave states." He also goes to describe various methods stateless societies used in curtailing violence and resolving feuds, and addresses a range of claims made by Diamond which similarly appear in Pinker's work.[52]"
 
I appreciate the humor in your quip, but I'm not buying that paper's interpretation that the operative notion of 'memes' that has run wild in popular culture was the fault of the general reading public's misinterpretation of Dawkins, Dennett, Hofstadter, or Blackmore rather than the fault of the authors themselves. Dawkins was unclear in the first place and the rest of them took 'memes' to mean what they wanted them to mean. The author of that paper turns himself inside out to avoid stepping on the toes of the named individuals responsible for the confusion, likely because one or more of them might do him some practical good at some point in his career. An example of what's wrong with academia.

Quip? I didn't think I quipped ... ? What was the quip?
 
Ultimately it is a question of the planetary ecology -- how much life it can support. Our species (sitting at the controls of what happens) has had the 'brains' but not the sense to work together to reduce human birth rates. That in itself is a manifestly reasonable solution to our overpopulation problem, and should long ago have been addressed and enforced -- except that our warring tribal power structures cannot agree to work together to bring it about through the single global agency we have, the radically limited UN.



Indeed: the Holocaust in Germany, and the one visited on the native population of North America (among similar outrages). That doesn't make your proposal more reasonable or acceptable (though you go on to elaborate details that seem to be more 'humane'). I don't personally want to debate those proposals, and I don't know how they could be applied responsibly: who/what body of medical judges would identify and justify each human obliteration? To me only controlling the birth rate is acceptable, and it could be done. But speaking of the survivors of holocausts and genocides, in what sense do you mean 'survival'? Merely the continued existence of the survivors? As if those survivors didn't continue to live with broken spirits and scalded hearts? Have you talked with any of these survivors? I have, especially Jewish people who can't speak about what was done to their parents, wives, husbands, children without breaking down 30, 40, 50 years later, who live hidden away in their apartments fearful of the world beyond the door, unable to engage it, permanently disheartened by outrage and grief and a terrible sense of helplessness.



That's an interesting idea about the Neanderthals that I've seen expressed elsewhere. Continuing genetic traces of Neanderthal DNA in ours could account for some of the extreme variations in sensitivity we see in humans today. I do think our species is in general more emotionally calloused these days, which I think derives as much from the dominant current interpretation of what we are (mainly dominant in the West) as from the variety of traumas that most people on the planet have passed through over time. Merleau-Ponty used this metaphor -- the fish is in the water and the water is in the fish -- to evoke for his readers the intimate interconnections and interdependence between consciousness and the world in which it exists. The metaphor also works if we extend it to the compromised health and vigor of organisms in polluted environments today. The more polluted the world becomes, the more damaged we become, both physically and spiritually. It's a vicious circle that needs to be remediated from both ends of the spectrum of subjectivity and objective conditions within which we live and find reasons to want to live, or not.

Yes - when I lived in Germany I spoke to many survivors and visited Dachau - one of two places I've been that have had a palpable sense of evil.

I've seen various figures, but some percentage did survive psychologically healthy - Viktor Frankl may have discussed this, and I have another excellent book on people who helped during the Holocaust; I'll check both sources ... then there would be a bell curve, I think, and yes, extreme trauma. I'm just saying that people have done what they needed to do to survive and then also survived the grief.

And no, I'm not looking to debate the proposals - the purpose of a hypothetical is to tease out beliefs ... so @ufology said that we could do the right thing without compassion according to pragmatic rules - like survival (if I read that right?) but then he said mass killing was wrong - so I was trying to come up with a scenario, a forced choice between 6 billion deaths or 7 billion deaths and extinction? Which would be chosen according to his morality? It's not meant to be realistic because we likely wouldn't have that kind of certainty ...

but

the Hiroshima incident is pretty close: very intelligent people (Oppenheimer, Groves, the President) made a pragmatic choice to inflict horrific death and injury (for generations) on a mostly civilian population on the rationale that it would end the war sooner and save in aggregate more lives (I don't know if those calculations were ever done or could reasonably be done), and Oppenheimer maintained to the end of his life that it was the right thing to do ... but my point is on @ufology's moral calculus - such an act wouldn't be "wrong" because, as in the case of the hypothetical 1 billion people, humanity would be saved ... it's a forced choice to see what you really believe. But if you say mass murder is wrong, then the 7 billion die and humanity is extinct.

So yes I very much appreciate your sense that it is flat out wrong and we spend our waning days contemplating where we went wrong. But I don't think most people would agree ... in practice and that's based on historical events like Hiroshima and other survival situations where civilized persons turned to murder and cannibalism for survival.
 
So am I right to put

@ufology
@Constance
@Burnt State

down for letting humanity go extinct (or take the near certain risk of extinction) over taking any action (other than population control) to reduce the population (even in a humane manner) by 6 billion ... i.e. prematurely ending 6 billion lives?

Choosing 7 billion over 6 billion to die?

I don't know what I would do in this situation ... based on situations where what I hoped I would do and what I did didn't match up ... but if I had to be involved in some way with bringing about the 6 billion deaths ... even in the face of overwhelming evidence that doing so would save 1 billion lives, would save humanity ... I might participate in something like the "humane" virus above ... but wow ... I'm not sure I'd want to live after it was all over, maybe I would contribute by volunteering for the first round of virus to rule out "side effects".
 
Steve wrote: "In looking for the review of On Having No Head in Hofstadter and Dennett's The Mind's I ... I found this:

An Error Occurred Setting Your User Cookie"

Quip? I didn't think I quipped ... ? What was the quip?

Based on a reading of the paper you linked, I thought you were playing with the idea of a computer glitch to account for the woefully inadequate development of what began with Dawkins's 'meme' in later 'scholarship' and interpretation traced in that paper. The author of that paper would have been relieved if it all could have resulted from a computer glitch rather than from the theory-laden misreadings of what Dawkins has said he intended to say. But in that case he wouldn't have been able to write his paper, which no doubt was a labor to write and is consequently a labor to read and offers no deep critique of the uses and misuses of the term 'meme'.
 
oh ok ... no it looks like there was a computer glitch ... bad link ... but you found the paper?
 
If we are going Transhuman, should we engineer into the species maturity, compassion, and other virtues? - i.e., more humanity into humanity ...

How do you, or the transhumanists in general, suppose that emotional capacities and behaviors evolved over millions of years could in fact be enhanced and improved in a machine? Read Frans de Waal:



Also, I guess I'm not sure we can "mature" as a species ... and even as individuals, we grow and change and adapt ... but we can go backwards as well as forward - maturity even changing according to social standards ... but I'm fairly sure that as a species we retain all of our capacities for good and evil, rather than progressively growing up and becoming less violent ... unless there is genetic engineering or other modifications....

We're still evolving, and I see many signs of increasing empathy with and concern for both humans and other animals these days.

Re 'genetic engineering', we don't know enough to mess with our or other species' genes. Besides which, it's now recognized in the biological sciences that much more than genes is involved in our and other animals' evolution.

There's a chance that many of our own contradictions and dysfunctional behaviors are the result of a superior species' altering our own genetics in the past.
 
So am I right to put

@ufology
@Constance
@Burnt State

down for letting humanity go extinct (or take the near certain risk of extinction) over taking any action (other than population control) to reduce the population (even in a humane manner) by 6 billion ... i.e. prematurely ending 6 billion lives?

Choosing 7 billion over 6 billion to die?

I don't know what I would do in this situation ... based on situations where what I hoped I would do and what I did didn't match up ... but if I had to be involved in some way with bringing about the 6 billion deaths ... even in the face of overwhelming evidence that doing so would save 1 billion lives, would save humanity ... I might participate in something like the "humane" virus above ... but wow ... I'm not sure I'd want to live after it was all over, maybe I would contribute by volunteering for the first round of virus to rule out "side effects".

In the nature of things, our species will die out (or destroy itself before it dies out). What is it about our species that leads you to feel its preservation is worth the sacrifice of six billion individuals and all the moral, ethical, and emotional consequences of that?
 
How do you, or the transhumanists in general, suppose that emotional capacities and behaviors evolved over millions of years could in fact be enhanced and improved in a machine? Read Frans de Waal:





We're still evolving, and I see many signs of increasing empathy with and concern for both humans and other animals these days.

Re 'genetic engineering', we don't know enough to mess with our or other species' genes. Besides which, it's now recognized in the biological sciences that much more than genes is involved in our and other animals' evolution.

There's a chance that many of our own contradictions and dysfunctional behaviors are the result of a superior species' altering our own genetics in the past.

I'm not ... and I don't ... but I'm hoping a TxHer will! @ufology

Me personally I'm an OEM guy ... remember? To the point of not even wanting something removed that no longer works ... but if no one steps up, I'll play devil's advocate (as if "HE" needs one ...)
 
In the nature of things, our species will die out (or destroy itself before it dies out). What is it about our species that leads you to feel its preservation is worth the sacrifice of six billion individuals and all the moral, ethical, and emotional consequences of that?

Excellent point ...
 
In the nature of things, our species will die out (or destroy itself before it dies out). What is it about our species that leads you to feel its preservation is worth the sacrifice of six billion individuals and all the moral, ethical, and emotional consequences of that?

There is a bit of a paradox there ... I think ... do you see it too?

Also, failing to preserve 1 billion people, it could be argued (says Mephistopheles), would also have moral, ethical, and emotional consequences.
 
How do you, or the transhumanists in general, suppose that emotional capacities and behaviors evolved over millions of years could in fact be enhanced and improved in a machine? Read Frans de Waal:





We're still evolving, and I see many signs of increasing empathy with and concern for both humans and other animals these days.

Re 'genetic engineering', we don't know enough to mess with our or other species' genes. Besides which, it's now recognized in the biological sciences that much more than genes is involved in our and other animals' evolution.

There's a chance that many of our own contradictions and dysfunctional behaviors are the result of a superior species' altering our own genetics in the past.

I hope so ... As I said, I'm cynical ... but perhaps there is a direction there, an evolving "toward", as we've explored - just a bit - in the idea of teleology. Certainly hope itself could be something to evolve toward, on a species level and for me personally.

Gotta meditate on that one!
 