SIVYER PSYCHOLOGY


COGNITIVE

OUTLINE OF THE COGNITIVE APPROACH

The cognitive approach: the study of internal mental processes, the role of schema, the use of theoretical and computer models to explain and make inferences about mental processes. The emergence of cognitive neuroscience.

The Cognitive Revolution began in the mid-1950s, when researchers in several fields began to develop theories of mind based on complex representations and computational analogies (Miller, 1956; Broadbent, 1958; Chomsky, 1959; Newell, Shaw, & Simon, 1958).

Cognitive science arose out of dissatisfaction with the behavioural approach, namely behaviourism’s reluctance to acknowledge the “black box” or, more specifically, the internal processes that make up the higher cognitive processes.

In short, cognitive psychologists believe that "what goes on inside the mind" is integral to understanding human behaviour.


THEORETICAL MODELS

Cognitive science is primarily concerned with creating theoretical models that simulate mental processes in a way that resembles the biological mind. This is because it is believed that internal thought processes can, in principle, be fully revealed by identifying the individual components of mental functions; for example, working memory can be broken down into components such as the episodic buffer and the central executive. It is through such analysis that we now understand that people have a finite capacity for working memory.

Theoretical models always address four questions:

  • What is the system?

  • What are the components and modules in a system?

  • How does the system work/perform?

  • How do those modules and systems communicate with other systems?

The multi-store model can be used to illustrate the above points. The system is memory. Sensory memory (SM), STM, and LTM are the components. The performance: information in SM must be attended to before it can pass to STM, where it is maintained by rehearsal, and so on. Finally, how these memory stores interact with language or visuo-spatial awareness is an example of how a system might communicate with other systems.

Cognitive psychologists investigate all inner mental abilities and, as a result, have subdomains in perception, attention, learning, memory, the processing of spoken and written language, thinking, reasoning, belief formation and consciousness (Coltheart, 2002).

Cognitive psychology compares the individual person to a processor of information: just as a computer stores and retrieves information and follows a program to produce an output, so, it is argued, do humans. Integral to this approach, therefore, is the "mind as computer" metaphor.
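The "mind as computer" metaphor can be made concrete with a toy program (purely illustrative: the stages follow the multi-store model, the capacity of 7 echoes Miller's "magic number", and every function name here is invented for illustration, not taken from any real cognitive-modelling library):

```python
# Toy illustration of the "mind as computer" metaphor:
# input -> processing stages -> output, loosely following the
# multi-store model (sensory memory -> STM -> LTM).

def sensory_memory(stimuli, attended):
    """Only attended stimuli pass from sensory memory to STM."""
    return [s for s in stimuli if s in attended]

def short_term_memory(items, capacity=7):
    """STM has a finite capacity (Miller's 'magic number' 7 +/- 2)."""
    return items[:capacity]

def rehearse_to_ltm(stm_items, rehearsed):
    """Only rehearsed items are encoded into long-term memory."""
    return [item for item in stm_items if item in rehearsed]

stimuli = ["dog", "bell", "car", "tree", "song"]
attended = {"dog", "bell", "song"}
rehearsed = {"dog", "song"}

stm = short_term_memory(sensory_memory(stimuli, attended))
ltm = rehearse_to_ltm(stm, rehearsed)
print(ltm)  # ['dog', 'song']
```

The point of the sketch is the pipeline itself: like a computer program, the model specifies the system, its components, and how information flows between them.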

RESEARCH METHODS USED IN COGNITIVE PSYCHOLOGY 

Given that cognitive psychologists study the "black box", the next logical question is: how do they study the box the behaviourists said could not be opened? In other words, how do they explore the holy of holies, thought itself?

Cognitive psychology, in its modern form, incorporates a remarkable set of new technologies in psychological science. It draws on four research domains to study the mind and brain:

  • Experimental cognitive psychology

  • Cognitive computer science

  • Cognitive neuropsychology

  • Cognitive neuroscience

EXPERIMENTAL COGNITIVE PSYCHOLOGY

The experimental method was the most popular research method before the emergence of cognitive neuroscience; cognitive psychology originated in the 1950s, when brain-imaging technology was not yet available.

With this method, the researchers initially formulate a theoretical model of a mental function, say how perception is organised into schemata. Experimental cognitive psychologists use reasoning to develop their theories, but unlike the philosophers before them, they do not end their conclusions there.

Theoretical models are then tested experimentally, usually in a controlled environment, which lends itself to the collection of scientific data. Only when the results have been analysed does the cognitive psychologist accept or reject the model.

Most cognitive psychologists seek to develop and test such theories through experiments with neurotypical people who are skilled performers in the relevant cognitive subdomain. Examples of experimental cognitive psychologists include Baddeley and Hitch, who created a theoretical model of short-term memory, and Loftus, who tried to create a cognitive model of memory; others include Miller, Peterson and Peterson, Bartlett, and Piaget.

Evaluation: Experiments face many threats to external validity, e.g., low mundane realism and ecological validity, but also threats to population validity, e.g., imposed etics and ethnocentrism. Moreover, experiments cannot show where cognitive functions are physically located in the brain.

COGNITIVE COMPUTER SCIENCE:

This domain studies the computation of the human mind and behaviour, and the links between natural and artificial intelligence. Computational modelling is closely akin to a branch of computer science called artificial intelligence: programmers build computational models to represent human cognitive processes.
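To illustrate what a computational model looks like in practice, the sketch below encodes the classic Ebbinghaus-style exponential forgetting curve, R = e^(-t/s). This is a textbook formula, not any particular researcher's program, and the strength parameter s = 20 is a hypothetical value chosen for illustration:

```python
import math

def retention(t_hours, strength=20.0):
    """Ebbinghaus-style forgetting curve: R = e^(-t/s), where
    s is a memory-strength parameter (hypothetical value here).
    Returns the predicted proportion of material retained."""
    return math.exp(-t_hours / strength)

# Predicted retention at several delays; a researcher would compare
# these model predictions against experimental recall data.
for t in (0, 1, 24, 48):
    print(f"{t:>3} h: {retention(t):.2f}")
```

A model like this makes precise, testable predictions: if participants' recall drops faster or slower than the curve predicts, the model (or its parameters) must be revised, which is exactly how computational modelling feeds back into cognitive theory.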

Evaluation: It’s a systematic way of investigating mental processes, but it cannot be regarded as a representation of the exact cognitive mechanisms involved.

COGNITIVE NEUROPSYCHOLOGY:

Like experimental cognitive psychology, cognitive neuropsychology is a branch of cognitive psychology that aims to understand how the structure and function of the brain relate to specific psychological processes. In addition, cognitive neuropsychology places a particular emphasis on studying the cognitive effects of brain injury or neurological illness, with a view to inferring models of normal cognitive functioning.

In other words, cognitive neuropsychology studies people with brain damage so it can make deductions about how the neurotypical brain works. Examples of cognitive neuropsychology include KF, HM, Phineas Gage, Tan (Paul Broca), Clive Wearing, Charles Whitman and many more.

Evaluation: Individuals may have a unique brain organisation, so case studies are not generalisable. This method was very useful in the past, when technologies such as fMRI and PET scans were not available. It is not well suited to deciphering higher cognitive functions such as reasoning and decision-making, or cases where cognitive functions and processes overlap, and real-time activity in the brain cannot be measured.

COGNITIVE NEUROSCIENCE:

Cognitive neuroscience is the scientific field that is concerned with the study of the biological processes and aspects that underlie cognition, with a specific focus on the neural connections in the brain which are involved in mental processes. Basically, it is the biology behind cognitive psychology. This part of the approach uses modern brain-scanning technology to establish where and when cognitive processes happen within the brain. Methods employed in cognitive neuroscience include experimental procedures from psychophysics and cognitive psychology, functional neuroimaging, electrophysiology, cognitive genomics, and behavioural genetics.

Evaluation: Scans can have poor spatial resolution (picture quality) and/or poor temporal resolution (the images arrive some time after the behaviour). Scans cannot provide pictures of very complex behaviours and can be uncomfortable for the participant.

CRITICAL EVALUATION OF THE COGNITIVE APPROACH

Contemporary cognitive psychology has increasingly looked towards the brain to explain the inner workings of the human mind; this is most evident in the rise of cognitive neuroscience and its use of neuro-imaging. This method has, however, been met with mixed reactions. On the one hand, classical cognitive scientists, in the experimental and computational traditions, have argued that cognitive neuroimaging does not, and cannot, answer questions about the cognitive mechanisms responsible for creating intelligent behaviour. Neuro-imaging does not tell us enough about mental processes, which means that the researcher has to make inferences that may involve subjectivity. For example, Savage-Rumbaugh studied language acquisition in a bonobo named Kanzi, but in reality neuro-imaging cannot reveal what Kanzi or any other great ape is actually thinking; it is limited to information about the neural basis of cognition.

Thus, some researchers still see a distinction between the two approaches, e.g., that cognitive psychology is more focused on information processing and behaviour, while cognitive neuroscience studies the underlying biology of information processing and behaviour. But others argue “that cognitive neuroscience is the integration of at least four different areas which emphasise mental, not brain, functions. In other words, it is not as concerned with biology as, say, neuroscience is. It is concerned with creating computer models to simulate mental behaviour in a way that resembles the biological mind” (Richard Ivry, UC Berkeley).

To demonstrate the difference between cognitive neuroscience and neuroscience: neuroscience uses scans to investigate biological functions, whereas cognitive neuroscience uses them to investigate mental functions.

The four disciplines emerged as technology provided better ways of investigating the brain. But all the research domains of cognitive science have exactly the same objective: they all want to discover the same underlying principles. The only difference between these domains is how they investigate their theories.


Many criticisms of cognitive psychology tend to focus on the techniques used by experimental cognitive psychologists, e.g., laboratory studies with contrived tasks that have little to do with everyday behaviour in natural settings. Such studies tend to test phenomena that people would be unlikely to face in real life, such as learning digit sequences to test memory. Issues of external validity, such as a lack of mundane realism and ecological validity, are therefore common.

But criticising cognitive psychology for the practices of one of its subdomains is like "chucking the baby out with the bathwater", an idiomatic expression for the avoidable error of eliminating something good while trying to get rid of something bad. Ultimately, cognitive psychology uses many other research methods to investigate its theories, such as cognitive neuroscience, cognitive neuropsychology and cognitive computer science. Methods such as these use a variety of rigorous procedures to collect and evaluate evidence. This is a major strength of the approach, as it triangulates findings and offsets the criticism of artificiality so often levied at experimental cognitive psychology. All the research domains have limitations (see above), but when their findings are combined, the conclusions become very robust. For example, Baddeley’s dual-task experiments were supported by clinical case studies of brain-damaged people like HM and KF, and also by PET scans.

Overall, research from the cognitive sciences is very well regarded by the scientific community as it is empirically sound.

Other evaluations devalue the analogy between the human mind and the operations of a computer (inputs and outputs, storage systems, a central processor), because there is a huge difference between information processing in machines and in organic biological structures such as the human mind. For example, the human mind is prone to errors, forgetting, or "retrieving" incorrect information from memory, which computers do not do. Humans are also affected by emotions, hormones and life events. Therefore, basing models of cognitive processes on a computer-based understanding may lack validity, as it may not be a true fit for how the human mind actually works.

The cognitive approach may be considered reductionist, as it does not provide a full explanation of human behaviour; it ignores some biological and social contributors, such as emotions and parenting. For example, Baron-Cohen's research into autism ignored other factors, such as hormone levels.

Cognitive psychology sits on the soft determinist fence and, therefore, represents a middle ground between hard determinism and free will. It views people as having a choice that is not totally constrained by external or internal factors. The cognitive approach acknowledges that all actions have causes, as actions are confined by the limits of a biological system; for example, dogs cannot talk no matter how much you immerse them in language, and humans cannot perceive ultraviolet light. Thus, cognitive psychologists accept that humans are limited in making some choices because of the way they are designed to process information. However, cognitive psychologists believe there is some conscious mental control over behaviour.

On the plus side, this view is quite refreshing because, apart from humanism, all the other psychological approaches are hard determinist, which is quite a depressing and fatalistic view of life. At least a soft determinist view recognises that individuals still have a part to play in causality and can be blamed or praised for their behaviour. On the negative side, hard determinists criticise cognitive psychologists for failing to realise the extent to which freedom is limited. They also point out that it is difficult to draw a line between what is and is not a determining factor in human choices. Moreover, is it determinism at all if it allows freedom?

B.F. Skinner criticised the cognitive approach because he believed that only external stimulus-response behaviour could be scientifically measured. More importantly, he thought the mediating processes between stimulus and response, i.e., the black box, did not exist and were thus irrelevant. Paradoxically, recent research in cognitive neuroscience has shown that the human brain can make up its mind up to ten seconds before an individual is cognisant of a decision: neuroimaging of participants making decisions revealed that researchers could predict what choice people would make before the subjects were even aware of having made one. So perhaps Skinner and the other early behaviourists were right after all, and maybe the black box isn’t that relevant.

It should be noted that this finding does not mean it is no longer important to study mental processes; it just means that consciousness may not be as central as the early cognitive psychologists thought.

But in summing up, the cognitive approach is probably the most dominant in psychology today. It has been applied to a wide range of practical and theoretical contexts across many areas of psychology. For example, the cognitive approach has helped us better understand how people form impressions of others via schemata, and how cognitive biases such as heuristics and cognitive dissonance can distort our thinking.

When applying the cognitive approach to psychopathology, it has also highlighted that dysfunctional behaviour might be due to faulty or irrational thought processes and how cognitive behavioural therapy (CBT) has been very effective for treating these dysfunctional thoughts, e.g., depression and anxiety (Hollon & Beck, 1994).

The cognitive approach has also helped improve childcare, education and eyewitness testimony. For example, in Loftus and Palmer's study of memory, findings suggested that police officers need to avoid using leading questions when interviewing witnesses, as this may alter the memory of the witnesses.

Lastly, cognitive science integrates with many other approaches and areas of study to produce, for example, social learning theory, evolutionary psychology, neuroscience and artificial intelligence (AI).


COGNITIVE BIASES

GENERALISATIONS

A bias is a disproportionate weight in favour of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. Biases can be innate or learned. People may develop biases for or against an individual, a group, or a belief. In science and engineering, a bias is a systematic error.

 Stereotype: a widely held but fixed and oversimplified image or idea of a particular type of person or thing, "the stereotype of the woman as the carer".

 Schema (singular), schemata (plural): In psychology and cognitive science, a schema describes a pattern of thought or behaviour that organizes categories of information and the relationships among them. A schema is a workflow or storyboard that tells you what to do in a recurring situation. For example, when you go shopping, you might follow an internal script to make shopping more efficient: you pick the same products in the same sequence every time. You know in advance what you will buy; there are no decisions involved; in fact, you ignore all alternative products. A schema allows you to navigate familiar, recurrent, and similar situations by following the same sequence of actions.

 A heuristic is a mental shortcut that our brains use that allows us to make decisions quickly without having all the relevant information. They can be thought of as rules of thumb that allow us to make a decision that has a high probability of being correct without having to think everything through.

 A heuristic consists of preferences that help you decide when you do not have enough information or do not care enough to make an informed decision. For example, when you want to buy yogurt but are not a nutritionist, you might choose a yogurt by the familiarity of the brand name (you prefer the familiar; this is called the familiarity heuristic) and other aspects that have nothing to do with the yogurt itself.

Heuristics are mental shortcuts or rules of thumb that individuals use to simplify decision-making and problem-solving. They can be grouped into several common types:

  1. Availability Heuristic: People tend to rely on information that is readily available or easily recalled from memory. If something comes to mind easily, it is perceived as more common or important.

  2. Representativeness Heuristic: This heuristic involves making judgments based on how well an object or situation matches a prototype. It categorises things based on how similar they are to a typical example.

  3. Anchoring and Adjustment Heuristic: It involves starting with an initial estimate (the anchor) and then adjusting it based on additional information. The initial anchor can significantly influence the final judgment.

  4. Simulation Heuristic: People often assess the likelihood of events based on how easily they can imagine or mentally simulate the outcome. If an event is easy to picture, it may be perceived as more likely.

  5. Substitution Heuristic: When faced with a difficult question, individuals may substitute an easier question and answer that instead. This allows for quicker decision-making, even if the substituted question is irrelevant.

  6. Affect Heuristic: Emotional responses or feelings can influence decision-making. People may rely on their emotional reactions to judge the desirability or riskiness of an option.

  7. Base Rate Heuristic: This involves using general information about the likelihood of an event occurring (base rate) and adjusting it based on specific information about the case at hand.

  8. Hindsight Bias: After an event has occurred, people tend to perceive the outcome as having been more predictable than it actually was. This bias can influence the way decisions are evaluated retrospectively.

  9. Confirmation Bias: This is not exactly a heuristic but is a related cognitive bias. It involves the tendency to search for, interpret, and remember information in a way that confirms one's preexisting beliefs.

These heuristics can be useful for making quick decisions, but they can also lead to errors and biases in judgment. It's important to be aware of them and consider them critically in decision-making processes.
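Anchoring and adjustment (type 3 above) is simple enough to caricature numerically. In this toy sketch, a judgement starts at the anchor and adjusts only part of the way towards the true value; the 0.6 adjustment rate is an invented parameter for illustration, not an empirical figure:

```python
def anchored_estimate(anchor, true_value, adjustment_rate=0.6):
    """Adjust from the anchor towards the true value, but only
    partially -- insufficient adjustment is the hallmark of the
    anchoring bias."""
    return anchor + adjustment_rate * (true_value - anchor)

true_value = 100
print(anchored_estimate(anchor=20, true_value=true_value))   # 68.0: a low anchor drags the estimate down
print(anchored_estimate(anchor=400, true_value=true_value))  # 220.0: a high anchor drags the estimate up
```

The two calls show why the initial anchor matters so much: both judges are estimating the same true value of 100, yet their final estimates land far apart, each pulled towards its own starting point.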

A LIST OF COGNITIVE BIASES

Unconscious bias: An unconscious bias (also called Implicit bias) refers to attitudes and beliefs that occur outside of our conscious awareness and control.

 Identity politics: the tendency for people of a particular religion, race, social background, etc., to form exclusive political alliances, moving away from traditional broad-based party politics.

Cancel culture or a call-out culture is a modern form of ostracism in which someone is thrust out of social or professional circles – whether it be online, on social media, or in person. Those subject to this ostracism are said to have been "cancelled”

Affect Heuristic: Why we tend to rely upon our current emotions when making quick, automatic decisions.

 Ambiguity Effect: Why we prefer options that are known to us

Anchoring Bias: Why we tend to rely heavily upon the first piece of information we receive

Attentional Bias: Why do our perceptions change based on what we commonly think about?

Availability Heuristic: Why we tend to think that things that happened recently are more likely to happen again.

Bandwagon Effect: Why do people prefer popular options?

Base Rate Fallacy: Why we rely on event-specific information over statistics.

Bottom-Dollar Effect: Why we are more likely to be dissatisfied with that product than we otherwise would be.

Bounded Rationality: Why our decisions may not be optimal and are limited by time, information, and mental capacity.

 Bundling Bias: Why we are more likely to consume our individual purchases than a bundle purchase.

Bye-Now Effect: Why, when we read homophones, are we still primed by the other meaning of the word?

Cashless Effect: Why we're willing to pay more when no cash is involved in a transaction.

Category Size Bias: Why do we perceive an option as more likely if it belongs to a large category than if it is in a small category?

Choice Overload Bias: Why we tend to have difficulty making choices when faced with many options.

Cognitive Dissonance: Why we tend to prefer having consistency in our beliefs (cognitions).

 Commitment Bias: Why do people support their past ideas, even when presented with evidence that they're wrong?

 Confirmation Bias: Why we interpret information favouring our existing beliefs.

 Decision Fatigue: Why making many decisions reduces our ability to make rational ones.

Declinism: Why we feel the past is better compared to what the future holds

 Decoy Effect: Why a third, inferior option can change how we decide between two similar-valued options.

 Default Bias: Why we generally prefer to keep situations as they currently are

 Distinction Bias: Why we tend to view two options as more distinctive when evaluating them simultaneously than separately.

 Dunning–Kruger Effect: Why incompetent people fail to recognize their incompetency.

 Empathy Gap: Why we tend to mispredict behaviours based on our emotional state.

 Endowment Effect: Why we value our possessions more highly than the possessions of others.

 Extrinsic Incentive Bias: Why do people think that extrinsic incentives are more motivating for others?


 Forer Effect: Why we tend to accept any personality feedback as true, regardless of validity.

 Framing Effect: Why option presentation changes our decision-making.

 Functional Fixedness: Why it's difficult to see any way to use objects besides their standard use.

 Fundamental Attribution Error: Why we favour dispositional judgements over situational influences when explaining others' behaviour.

Gambler’s Fallacy: Why we tend to think an event is less likely to happen again if it happened several times in the past

Google Effect: Why we seem to forget information easily

Halo Effect: Why First Impressions Last

Hard–easy Effect: Why do we tend to overestimate our ability to do something considered hard and underestimate our ability to do something considered easy?

Hindsight Bias: Why we tend to see events as being more predictable than they truly were after the event occurred.

Hot-hand Fallacy: Why, after experiencing a series of positive outcomes for a random event, we tend to predict that future outcomes will also be positive.

Hyperbolic Discounting: Why we value immediate rewards more than long-term rewards.

Identifiable Victim Effect: Why we act to help specific groups even though others need help as well

IKEA effect: Why do we place a disproportionately high value on things we helped to create?

Illusion of Control: Why we often overestimate our ability to influence events.

Illusion of Validity: Why are we overconfident in our predictions?

Illusory Correlation: Why we assume a correlation between two variables

Illusory Truth Effect: Why we are more likely to believe that something is true if it is repeated to us enough times.

In-group Bias: Why we tend to give favourable treatment to people who belong to the same groups as we do.

Incentivization: Why we tend to gravitate to working towards perks

Just-world Hypothesis: Why do we believe that all morally good actions add up to a reward and all morally bad actions add up to a punishment

Lag Effect: Why short lessons may not be a good idea

Law of the Instrument: Why we tend to apply a skill/tool anywhere

Less-is-better Effect: Why we prefer the smaller or the lesser alternative

Levelling and Sharpening: Why we lose certain details of our memory and why we remember some

 Levels-of-processing Effect: Why repetition improves memory retention

 Look-elsewhere Effect: When a statistically significant observation should be overlooked.

 Loss Aversion: Why is the pain of losing felt twice as powerfully compared to equivalent gains?

 Mental Accounting: Why we treat money differently depending on where it came from or what we intend it for

 Mere Exposure Effect: Why does being exposed to something or someone make us view that thing or person more positively

Motivating-Uncertainty Effect: Why rewards of uncertain size tend to motivate us more than known rewards

 Naive Allocation: Why we tend to prefer spreading limited resources evenly across options.

Negativity Bias: Why negative occurrences tend to have a greater impact on our psychological state than positive ones

Noble Edge Effect: Why we tend to favour brands that show care for societal issues

Nostalgia Effect: How our view of a rosy past can dictate our actions now

Observer-expectancy Effect: Why observers tend to unconsciously influence the behaviours of those they observe.

Omission Bias: Why we tend to react more strongly to harmful actions

Optimism Bias: Why people tend to underestimate their chance of experiencing adverse effects.

Ostrich Effect: Why we avoid dangerous or negative information

Peak-end Rule: Why we tend to judge an experience on how we felt at its peaks and its end

Pessimism bias: Why do some people overestimate the probability of negative events occurring

Planning Fallacy: Why we tend to underestimate the time we will need to complete a task when planning for it

Primacy Effect: Why the first items presented to us are more easily recalled than the rest

Priming Effect: Why unrelated stimuli can cause us to make different decisions than we normally would

Projection Bias: Why do we think our current preferences will remain the same in the future?

Reactive Devaluation: Why we often tend to devalue proposals made by people who we consider to be adversaries

Regret Aversion: Why we choose the option that minimises regret even if it's not optimal

Response Bias: Why responses to a survey or experiment can be inaccurate due to the nature of the survey or experiment

Restraint Bias: Why we tend to overestimate our control over impulsive behaviours

Rosy Retrospection: Why do we only remember the positive elements of our past?

Salience Bias: Why do we focus on more prominent things and ignore those that are less so?

Self-serving Bias: Why people overestimate their contribution to any outcome

Sexual Overperception Bias: Why men sense sexual interest when there is none and miss sexual interest when there is

Social Norms: Why do we try to act similarly to communities we are in

Source Confusion: Why we forget where our memories come from, and thereby lose our ability to distinguish the reality or likelihood of each memory.

Spacing Effect: Why information learned repeatedly is better retained when learned farther apart.

Spotlight Effect: Why is there a tendency to feel like all eyes are looking at us

Suggestibility: Why our memories can be changed by the suggestions of others

Survivorship Bias: Why we misjudge groups by only looking at specific group members

Telescoping Effect: Why recent events are thought to be longer ago than they were, and remote events thought to be more recent

Zero Risk Bias: Why do we prefer to eliminate one category of risk, even if doing so increases the overall risk?


FALLACIES

A comprehensive list of “Logical Fallacies” You Should Know and How to Spot Them

 Fallacies are common errors in reasoning that will undermine the logic of your argument. Fallacies can be either illegitimate arguments or irrelevant points and are often identified because they lack evidence that supports their claim. Avoid these common fallacies in your own arguments and watch for them in the arguments of others.

  1. The Slippery Slope fallacy/The Domino fallacy

  2. The Hasty Generalisation fallacy/Over-generalisation fallacy

  3. The Slothful Induction Fallacy/Appeal to coincidence fallacy

  4. The Faulty Causality fallacy /The Correlation/Causation Fallacy

  5. The Anecdotal Evidence Fallacy (the fallacy of drawing on limited personal experience)

  6. The Post Hoc Ergo Propter hoc fallacy/Post hoc fallacy

  7. The Genetic Fallacy/The Appeal to Authority fallacy

  8. The Begging the Question fallacy /petitio principii

  9. The Circular Argument fallacy (Circular reasoning is when you attempt to make an argument by beginning with an assumption that what you are trying to prove is already true).

  10. The Appeal to Ignorance fallacy

  11. The False Dilemma fallacy/ The Either /Or fallacy

  12. The Ad Hominem fallacy (attacks a person or a person’s background instead of the person’s ideas)

  13. The Ad Populum fallacy /The Bandwagon Appeal fallacy/ (Latin for "appeal to the people").

  14. The Red Herring fallacy/ Diverts attention from the issue fallacy

  15. The Straw Man fallacy (when your opponent over-simplifies or misrepresents your argument)

  16. The Moral Equivalence fallacy

  17. The Alphabet Soup fallacy.

  18. The Texas Sharpshooter Fallacy (cherry-picking data)

  19. The Middle Ground Fallacy

  20. The Burden of Proof Fallacy

  21. The Sentimental Appeals fallacy

  22. The Dogmatism fallacy

  23. The Scare Tactics fallacy

  24. The Appeals to False Authority fallacy

  25. The Equivocation fallacy (confuses naming with explaining)

  26. The Non Sequitur fallacy

  27. The Faulty Analogy fallacy

  28. The Appeals to Popularity fallacy/Inappropriate appeals to common opinion fallacy

  29. The Wishful Thinking fallacy

  30. The Appeals to False Authority fallacy

  31. The Perfect Solution fallacy

  32. The Glittering Generality fallacy (name-calling in reverse)

Logical Fallacies

The Slippery Slope fallacy/The Domino fallacy

This is a conclusion based on the premise that if A happens, then, through a series of small steps B, C, X, Y, eventually Z will happen too, basically equating A and Z. So, if we don't want Z to occur, A must not be allowed to occur either. Example: if we ban four-by-fours because they are bad for the environment, eventually the government will ban all cars, so we should not ban four-by-fours.

In this example, the author equates banning four-by-fours with banning all cars, which is not the same thing.

The Hasty Generalisation fallacy/Over-generalisation fallacy

This is a conclusion based on insufficient or biased evidence. In other words, you are rushing to a conclusion before you have all the relevant facts. Example: Even though it's only the first day, I can tell this is going to be a boring course.

In this example, the author is basing his evaluation of the entire course on only the first day, which is notoriously boring and full of housekeeping tasks for most courses. To make a fair and reasonable evaluation the author must attend not one but several classes, and possibly even examine the textbook, talk to the professor, or talk to others who have previously finished the course in order to have sufficient evidence to base a conclusion on.

The Slothful Induction fallacy/The Appeal to Coincidence fallacy

Slothful induction is the exact inverse of the hasty generalisation fallacy above. This fallacy occurs when sufficient logical evidence strongly indicates that a particular conclusion is true, but someone fails to acknowledge it, instead attributing the outcome to coincidence or something entirely unrelated. Example: Even though every project Brad has managed in the last two years has run behind schedule, I still think we can chalk it up to unfortunate circumstances, not his project management skills.

The Faulty Causality fallacy/The Correlation/Causation fallacy

If two things appear to be correlated, this doesn’t necessarily indicate that one of those things irrefutably caused the other. This might seem like an obvious fallacy to spot, but it can be challenging to catch in practice, particularly when you really want to find a correlation between two points of data to prove your point.

Example: Our blog views were down in April. We also changed the colour of our blog header in April. This means that changing the colour of the blog header led to fewer views in April.

The Anecdotal Evidence fallacy

In place of logical evidence, this fallacy substitutes examples from someone’s personal experience. Arguments that rely heavily on anecdotal evidence tend to overlook the fact that one (possibly isolated) example can’t stand alone as definitive proof of a greater premise. Example: One of our clients doubled their conversions after changing all their landing page text to bright red. Therefore, changing all text to red is a proven way to double conversions.

The Post Hoc Ergo Propter Hoc fallacy/The Post Hoc fallacy

This is a conclusion that assumes that if ‘A’ occurred after ‘B’, then ‘B’ must have caused ‘A’. Example: I drank bottled water and now I am sick, so the water must have made me sick.

In this example, the author assumes that if one event chronologically follows another, the first event must have caused the second. But the illness could have been caused by the burrito the night before, a flu bug that had been working on the body for days, or a chemical spill across campus. There is no reason, without more evidence, to assume the water caused the person to be sick.

The Genetic Fallacy

This conclusion is based on an argument that the origins of a person, idea, institute, or theory determine its character, nature, or worth. Example: The Volkswagen Beetle is an evil car because it was originally designed by Hitler's army.

In this example, the author is equating the character of a car with the character of the people who built the car. However, the two are not inherently related.

The Begging the Question fallacy (petitio principii)

The conclusion that the writer should prove is assumed within the claim. Example: Filthy and polluting coal should be banned. Arguing that coal pollutes the earth and thus should be banned would be logical. But the very conclusion that should be proved, that coal causes enough pollution to warrant banning its use, is already assumed in the claim by referring to it as "filthy and polluting."

The Circular Argument fallacy

Circular reasoning is when you attempt to make an argument by beginning with the assumption that what you are trying to prove is already true. This restates the argument rather than actually proving it. Example: George Bush is a good communicator because he speaks effectively. In this example, the conclusion that Bush is a "good communicator" and the evidence used to prove it, "he speaks effectively", are basically the same idea. Specific evidence, such as using everyday language, breaking down complex problems, or illustrating his points with humorous stories, would be needed to prove either half of the sentence.

The Appeal to Ignorance fallacy

This fallacy is based on the assumption that a statement must be true if it cannot be proven false, or false if it cannot be proven true. Also known as argumentum ad ignorantiam and the argument from ignorance.

The False Dilemma fallacy/The Either/Or fallacy

This is a conclusion that oversimplifies the argument by reducing it to only two sides or choices. Example: We can either stop using cars or destroy the earth. In this example, the two choices are presented as the only options, yet the author ignores a range of choices in between, such as developing cleaner technology, car-sharing systems for necessities and emergencies, or better community planning to discourage daily driving.

The Ad Hominem fallacy (attacks a person or a person’s background instead of the person’s ideas)

This is an attack on the character of a person rather than his or her opinions or arguments. Example: Greenpeace’s strategies aren’t effective because they are all dirty, lazy hippies. In this example, the author doesn’t even name particular strategies Greenpeace has suggested, much less evaluate those strategies on their merits. Instead, the author attacks the characters of the individuals in the group.

The Ad Populum fallacy/The Bandwagon Appeal fallacy (Latin for "appeal to the people")

This is an appeal that presents what most people or a group of people think, in order to persuade one to think the same way. Getting on the bandwagon is one such instance of an ad populum appeal. Example: If you were a true American you would support the rights of people to choose whatever vehicle they want.

In this example, the author equates being a "true American," a concept that people want to be associated with, particularly in a time of war, with allowing people to buy any vehicle they want even though there is no inherent connection between the two.

The Red Herring fallacy (diverts attention from the issue)

This is a diversionary tactic that avoids the key issues, often by sidestepping opposing arguments rather than addressing them. Example: The level of mercury in seafood may be unsafe, but what will fishers do to support their families? In this example, the author switches the discussion away from the safety of the food and talks instead about an economic issue, the livelihood of those catching fish. While one issue may affect the other, it does not mean we should ignore possible safety issues because of possible economic consequences to a few individuals.

The Straw Man fallacy (when your opponent over-simplifies or misrepresents your argument)

This move oversimplifies an opponent’s viewpoint and then attacks that hollow argument. Example: People who don’t support the proposed state minimum wage increase hate the poor. In this example, the author attributes the worst possible motive to an opponent’s position. In reality, however, the opposition probably has more complex and sympathetic arguments to support their point. By not addressing those arguments, the author is not treating the opposition with respect or refuting their position.

 The Moral Equivalence fallacy 

This fallacy compares minor misdeeds with major atrocities, suggesting that both are equally immoral. Example: That parking attendant who gave me a ticket is as bad as Hitler. In this example, the author is comparing the relatively harmless actions of a person doing their job with the horrific actions of Hitler. This comparison is unfair and inaccurate.

 The Alphabet Soup fallacy

A corrupt implicit fallacy from ethos in which a person inappropriately overuses acronyms, abbreviations, form numbers, and arcane insider "shop talk" or jargon, primarily to prove to an audience that he or she "speaks their language" and is "one of them", and to shut out, confuse, or impress outsiders.

The Texas Sharpshooter Fallacy (cherry-picking data)

This fallacy gets its colourful name from an anecdote about a Texan who fires his gun at a barn wall and then proceeds to paint a target around the closest cluster of bullet holes. He then points at the bullet-riddled target as evidence of his expert marksmanship.

Speakers who rely on the Texas sharpshooter fallacy tend to cherry-pick data clusters based on a predetermined conclusion. Instead of letting a full spectrum of evidence lead them to a logical conclusion, they find patterns and correlations in support of their goals and ignore evidence that contradicts them or suggests the clusters weren’t actually statistically significant. Example: Lisa sold her first start-up to an influential tech company, so she must be a successful entrepreneur. (She ignores the fact that four of her start-ups have failed since then.)

The Middle Ground Fallacy

This fallacy assumes that a compromise between two extreme conflicting points is always true. Arguments of this style ignore the possibility that one or both of the extremes could be completely true or false, rendering any form of compromise between the two invalid as well. Example: Lola thinks the best way to improve conversions is to redesign the entire company website, but John is firmly against making any changes to the website. Therefore, the best approach is to redesign some portions of the website.

The Burden of Proof Fallacy

If a person claims that X is true, it is their responsibility to provide evidence in support of that assertion. It is invalid to claim that X is true until someone else can prove that X is not true. Similarly, it is also invalid to claim that X is true because it’s impossible to prove that X is false. In other words, just because there is no evidence presented against something, that doesn’t automatically make it true. Example: Barbara believes the marketing agency’s office is haunted since no one has ever proven that it isn’t haunted.

The Sentimental Appeals fallacy

A debater attempts to win an argument by provoking an emotional reaction from the opponent and audience rather than by presenting evidence. It is generally characterised by the use of loaded language and concepts (God, country, and apple pie being good concepts; drugs and crime being bad ones).

The Dogmatism fallacy

This fallacy occurs when one doctrine is pushed, often intensely, as the only acceptable conclusion, as though that belief were beyond question. Dogmatists are unwilling even to consider an opposing argument and believe they are so correct that they need not examine evidence to the contrary.

The Scare Tactics fallacy

A strategy that uses fear to influence the public’s reaction, coercing a favourable response by preying upon the audience’s fears. Scare tactics are not direct threats but intimated conclusions. If we base these conclusions on fear rather than on evidence, we have committed a logical fallacy.

The Appeal to False Authority fallacy

Using an alleged authority as evidence in your argument when that authority is not really an authority on the facts relevant to the argument, or, as the audience, allowing an irrelevant authority to add credibility to the claim being made. See also the appeal to authority fallacy.

The Equivocation fallacy

This fallacy occurs when a key term or phrase in an argument is used in an ambiguous way, with one meaning in one portion of the argument and another meaning in another portion. Example: I have the right to watch "The Real World." Therefore, it’s right for me to watch the show.

The Non Sequitur fallacy

The term non sequitur refers to a conclusion that isn’t aligned with previous statements or evidence. For example, if someone asks what it’s like outside and you reply, "It’s 2:00," you’ve just used a non sequitur, a statement that does not follow from what was being discussed.

The Faulty Analogy fallacy/The False Analogy fallacy

This is a type of informal fallacy. It states that since Item A and Item B both have Quality X in common, they must also have Quality Y in common. For example, say Joan and Mary both drive pickup trucks. Since Joan is a teacher, Mary must also be a teacher. This is flawed reasoning.

The Appeals to Popularity fallacy /Inappropriate appeals to common opinion fallacy:

This fallacy presents the claim that most people, or many people in a particular group, accept a belief as true as evidence for that claim. Accepting another person’s belief, or many people’s beliefs, without demanding evidence as to why that person accepts the belief, is lazy thinking and a dangerous way to accept information.

The Wishful Thinking fallacy

This fallacy describes decision-making and the formation of beliefs based on what might be pleasing to imagine, rather than on evidence, rationality, or reality. (Some psychologists do believe that positive thinking can positively influence behaviour and so bring about better results.)

 The Appeals to False Authority fallacy

An appeal to false authority is an argument that states that we should listen to the opinion of a false authority figure, who is framed as a credible authority on the topic being discussed.

 The Perfect Solution fallacy 

The perfect solution fallacy is an informal fallacy that occurs when an argument assumes that a perfect solution exists, or when a solution is rejected because some part of the problem would still exist after it was implemented.

The Glittering Generality fallacy (name-calling in reverse)

Using glittering generalities, emotionally appealing words applied so vaguely that they carry conviction without supporting information, has been described as "name-calling in reverse." Examples of words commonly employed as glittering generalities in political discourse include freedom, security, tradition, change, and prosperity.