SIVYER PSYCHOLOGY

KEY FEATURES OF SCIENCE

‘Philosophy is dead,’ Stephen Hawking once declared, because it ‘has not kept up with modern developments in science, particularly physics.’ It is scientists, not philosophers, who are now ‘the bearers of the torch of discovery in our quest for knowledge’.

Was the universe created in six days, or has it evolved over approximately 14 billion years? Did a higher power intricately design the universe, or did it originate from the Big Bang? Does a common ancestor link humans and apes, or did a divine act bring forth Adam and Eve?

Proponents of science emphasise its reliability and validity, contrasting it with religious and philosophical explanations rooted in ancient belief systems. While these ancient beliefs still have cultural and historical significance, science offers an advanced perspective.

THE KEY FEATURES OF SCIENCE ARE

  • EMPIRICAL EVIDENCE

  • CAUSE AND EFFECT RELATIONSHIPS – PREDICTABILITY

  • SYSTEMATIC THEORY CONSTRUCTION

  • OBJECTIVITY

  • FALSIFIABILITY

  • PARADIGMS AND PARADIGM SHIFTS

  • NOMOTHETIC APPROACH (THIS IS ALSO DISCUSSED SEPARATELY)

  • QUANTITATIVE DATA

  • CONTROLLED EXPERIMENTATION

  • PEER REVIEW

  • OPERATIONALISATION OF VARIABLES

  • OPENNESS AND TRANSPARENCY

EMPIRICAL EVIDENCE

“Science is only as good as its research tools.”

In today's world, there is extensive knowledge about humanity, and it's almost second nature to consider the brain as central to understanding behaviour. However, this understanding was not always as clear-cut. In earlier times, without the benefit of sophisticated tools such as fMRIs and microscopes, the early scholars had to lean heavily on philosophy. They used reason and logic to make sense of the world and human nature. This lack of ability “to see” led to the formulation of many theories that, while creative, were untested and later disproven. For example, with their advanced civilisation, Ancient Egyptians surprisingly viewed the brain as less significant than the heart. Similarly, during certain periods, women who displayed intelligence risked being unfairly labelled as witches, and mental illnesses were often misattributed to supernatural causes like demonic possession.

SEEING IS BELIEVING

Humanity's collective understanding broadened significantly as methodologies for exploring complex phenomena evolved; this shift from a sole reliance on philosophical reasoning to the incorporation of investigative forms of technology marked a substantial change in the approach to discovering truths about the natural world and the human condition.

John Locke, often heralded as the father of empiricism, championed the principle that 'seeing is believing.' He viewed human reasoning as inherently fallible and posited that, for true certainty, scientists needed to engage all their senses to gather data. His perspective, however, extended beyond literal sight: he argued that specialised tools such as telescopes or microscopes could uncover truths about the world that the unaided senses could not reveal.

This journey from reasoning to unaided observation and, finally, to increasingly enhanced research tools represents the evolution of knowledge. More importantly, it shows how ideas metamorphosed from magic to philosophy to science.

Here's a chronological overview of some pivotal technologies and techniques:

  • Ablation and Lesion Studies (19th century and earlier): Early neuroscientists would remove or damage parts of the brain in animals to observe the effects on behaviour. This provided the first insights into the localisation of brain functions.

  • Electrical Stimulation (mid-19th century): Pioneers like Luigi Galvani and later Eduard Hitzig and Gustav Fritsch applied electrical currents to the brain, revealing the electrical nature of neural activity and mapping functions to specific regions.

  • Cathode Ray Oscilloscope (20th century): The development of the cathode ray oscilloscope allowed scientists to measure the electrical activity of the brain, leading to the first brain wave recordings.

  • EEG (1920s): Hans Berger recorded the first human electroencephalogram, paving the way for decades of research into brain waves and states of consciousness.

  • Neuroimaging with X-rays and CT Scans (1970s): Computerized Tomography (CT) scanning was the first method that allowed us to create visual images of the brain non-invasively, although it was limited to showing brain structure rather than function.

  • MRI (1970s): Magnetic Resonance Imaging gave clearer, more detailed images of the brain's structure without using X-rays.

  • PET Scans (1970s): Positron emission tomography began to be used to view how the brain functions by showing metabolic activity and neurotransmitter interactions.

  • Microscopes: While light microscopes have been around for centuries, advances in microscopy throughout the 20th century, such as electron microscopy, allowed for an unprecedented look at neurons and their connections.

  • TMS (1985): Transcranial Magnetic Stimulation was developed as a research tool and a therapeutic intervention for various neurological and psychiatric conditions.

  • fMRI (1990s): Functional Magnetic Resonance Imaging revolutionized the field by allowing scientists to see brain activity and how different brain parts communicate during tasks.

  • Optogenetics (2000s): A breakthrough that combines genetics and optics to control the activity of individual neurons with light.

  • Genome Sequencing Technologies (2000s): With the completion of the Human Genome Project, researchers could begin to identify genetic factors related to brain function and psychological conditions.

  • Wearable Technology (2010s): The advent of consumer-grade devices that track physiological data has opened new avenues for understanding the day-to-day dynamics of psychological states.

DATA COLLECTION

In these scientific endeavours, the meticulous recording of observations — whether seen, heard, or felt — was paramount. These records, known as data, laid the groundwork for scientific discovery and the continuous building of knowledge, ensuring that each new insight was built upon a reliable and verifiable base.

COUNTER ARGUMENT TO EMPIRICISM

Many psychological elements, such as thoughts and emotions, resist direct observation. Romantic love is one example: it is a deeply internal state that cannot be quantified. First, love is inherently subjective; second, people may not always be truthful or self-aware about their feelings. As a result, psychologists often rely on indirect physical responses, like pupil dilation or heart rate, as surrogate measures for love. However, this methodology is fraught with limitations, as these physiological signs do not necessarily correlate directly with the emotion of love. An increased heart rate might signal someone in the throes of love, but it might equally indicate a congenital heart condition. Therefore, indirect measures of behaviour, such as IQ tests or galvanic skin response, cannot be seen as concrete evidence of specific psychological states.

Indeed, the assertion that all phenomena must be directly observable to be scientifically valid encounters several challenges, especially in modern scientific inquiry. Contemporary scientists often focus on phenomena that are not directly observable. For example, gravity, while not observable in itself, is universally accepted in the scientific community due to its demonstrable effects.

Similarly, in fields like particle physics, entities such as the 'strong force'—one of the fundamental forces in nature responsible for holding atomic nuclei together—are studied even though they cannot be observed directly. The evidence for the strong force is inferred from the behaviour of subatomic particles rather than observed.

In psychology, this principle parallels concepts like schemas and mental frameworks that help explain behaviour and cognition. While schemas are not directly observable, they are widely accepted because they effectively interpret and predict various psychological phenomena.

These ideas are perfectly encapsulated in a popular psychology joke, highlighting the essence of empiricism. Imagine two scientists who, after a night of passion, discuss their experience. In the morning, one casually remarks to the other, "It was good for you. How was it for me?" This joke cleverly illustrates the scientist’s focus on observable actions over internal experiences.

CAUSE AND EFFECT

Cause and effect is the backbone of the scientific method; it drives everything from hypotheses to conclusions in an experiment. The job of the scientific method is to see if the cause produces an observed effect. Grasping cause-and-effect relationships is crucial for developing effective treatments, shaping sound policies, and guiding further research. Without a clear understanding of causation, conclusions may be based on uncertain or coincidental associations, leading to ineffective or misguided applications in practical and research contexts.

Cause and effect is characterised by a specific event (the cause) leading directly to another event (the effect). This principle is key to discerning whether a direct causative link exists between two phenomena or whether their relationship might be coincidental or influenced by external factors. An example of a cause-and-effect relationship is the impact of sleep deprivation on cognitive performance. Lack of sleep (the cause) has been consistently shown to lead to decreased alertness and impaired memory (the effect). This relationship is well-established through various studies, illustrating how sleep deprivation directly influences and causes changes in cognitive functions rather than this being a mere correlation without a causal link.

Confusing cause and effect with correlation is common because both can appear related in a dataset. Correlation occurs when two variables show a tendency to change together, but this doesn't imply that one variable is causing the change in the other. For instance, Hilda baked a batch of pork pies. Every time she ate one, there was a bolt of lightning. It could be concluded that there is a positive correlation between lightning and pork pies, but what can't be said is that the pork pie caused the lightning, at least not without evidence to back it up. Unless scientific proof can be obtained that Hilda's pies are causing some weird weather conditions, it's just a coincidence that these lightning strikes occur when they do. This illustrates that correlation indicates a relationship or pattern between variables but does not establish a cause-and-effect link. Confusion arises because the simultaneous change in correlated variables can give the illusion of a direct causal relationship.
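
To see how easily a correlation can arise without any causal link, here is a minimal sketch in Python; the numbers are invented purely for illustration, and a hidden third factor (hot weather) drives both variables:

```python
# A minimal sketch: correlation describes co-variation, not causation.
# All numbers are invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Ten summer afternoons: hot weather influences BOTH variables (a hidden third factor).
temperature = rng.uniform(18, 35, size=10)                        # degrees Celsius
pork_pies_eaten = np.round(temperature * 0.2 + rng.normal(0, 1, 10))
lightning_strikes = np.round(temperature * 0.5 + rng.normal(0, 2, 10))

r, p_value = stats.pearsonr(pork_pies_eaten, lightning_strikes)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

# A large r here reflects the shared influence of temperature,
# not pork pies causing lightning (or vice versa).
```

The correlation coefficient only describes how the two variables move together; nothing in the calculation distinguishes cause, effect, or coincidence.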

CAUSE AND EFFECT OR COINCIDENCE?

Look at the following examples and decide whether they are correlations or cause-and-effect relationships. By the way, all the following statements are true.

  1. More than 98% of convicted criminals are bread eaters.

  2. 50% of all children who grow up in bread-consuming households score below average on standardised tests

  3. People who own cats have higher incomes than people who own dogs

  4. Most murderers in Scotland have a surname beginning with M

  5. In the 18th century, when virtually all bread was baked in the home, the average life expectancy was less than 50 years; infant mortality rates were unacceptably high; many women died in childbirth; and diseases such as typhoid, yellow fever and influenza ravaged whole nations.

  6. Primitive tribal societies that have no bread exhibit a low occurrence of cancer, Alzheimer's, Parkinson's disease and osteoporosis.

  7. Bread is often a "gateway" food item, leading users to "harder" items such as butter, jam, peanut butter and even ham.

  8. Newborn babies can choke on bread.

  9. It has been shown that children on the autism spectrum watch more television than neurotypical children.

All the examples above are correlations rather than scientific experiments because they do not demonstrate cause-and-effect relationships. They illustrate how correlations can establish statistical relationships between variables, but these connections do not show causality.

The true relationships between them are revealed below:

  1. The suggestion implies a link between criminal behaviour and bread consumption, but since bread is a common dietary staple, its consumption has no causal relationship with criminal activities. To illustrate this spurious connection, bread could be substituted with other common dietary items like milk.

  2. Pointing out that children in bread-consuming households score lower on tests is a classic example of correlation without causation. Bread consumption doesn’t impact academic performance; this is a coincidence rather than a cause and could be substituted with other foodstuffs such as potatoes.

  3. The observed correlation between cat ownership and higher income might reflect lifestyle differences. For instance, dog owners might not be able to spend significant time away from home like cat owners can, influencing their career choices and income levels.

  4. The observation that many murderers in Scotland have surnames beginning with 'M' is coincidental and lacks causal significance, considering the prevalence of 'Mc' and 'Mac' surnames in Scottish culture.

  5. Associating home-baked bread with lower life expectancy in the 18th century overlooks other significant health and mortality factors of that era, such as medical knowledge, hygiene practices, and nutrition.

  6. The lower incidence of certain diseases in tribes not consuming bread likely relates to their overall lifestyle and environment, not the absence of bread. A similar correlation could be produced with almost any food item absent from their diet, such as caviar or crackers.

  7. The idea of bread being a "gateway" to other food items mimics the concept of gateway drugs. It’s a playful take and not a serious commentary on a cause-and-effect relationship.

  8. While it’s true that newborns can choke on bread, this risk is common to all solid foods for infants, not specifically bread. It’s a general caution about feeding practices rather than an issue unique to bread.

  9. Children on the autism spectrum may tend to watch TV more than neurotypical children, and this can sometimes be because their parents use television as a way to take a break or have some respite.

These examples remind us of the importance of critical thinking when interpreting correlational data and to be wary of drawing causal inferences from mere associations.

COUNTER ARGUMENT TO CAUSE AND EFFECT

According to many scientists, there is a disparity between psychology and the hard sciences. Psychology is often underrated because studying humans doesn’t lend itself well to the scientific process. Physical scientists conduct experiments on inanimate matter, a practice they believe is fundamentally distinct from experiments involving human subjects. Unlike inorganic subjects that react consistently under controlled conditions—for instance, two quantities of water will both boil at 100 degrees Celsius regardless of different treatments—humans present a unique challenge. This is primarily because, unlike inorganic variables or many lower-order animals, humans have different personalities and life histories that can confound results.

Consider a hypothetical scenario: in a study on operant conditioning, three individuals are offered cocaine by an attractive peer, but their reactions diverge significantly. One refuses based on anti-drug education, another experiments once out of curiosity, and the third develops an addiction. Isolating the factors behind these varied outcomes for testing poses a significant challenge for psychologists.

Myriad factors affect behaviour, such as genetics, upbringing, diet, social class, income, ethnicity, culture, sexual orientation, gender, number of siblings, IQ, birth order, age, religion, disability, and family structure. Managing the vast array of variables that shape human individuality is daunting.

For psychology to ascertain cause and effect with scientific rigour, a researcher would have to replicate conditions with multiple sets of identical twins, controlling for all but one variable of interest—a methodology that is both impractical and ethically dubious. As a result, psychologists frequently resort to less definitive research methods like correlations, questionnaires, interviews, and case studies, which paradoxically raise more questions about their scientific validity and reliability.

Moreover, the laws of cause and effect only work reliably for inanimate objects. For example, if one snooker ball strikes another, the final resting place of each can be predicted with a reasonable degree of accuracy; after the initial impact, they no longer influence each other. Living systems are another matter. If a person kicks a dog, physics can determine how far the dog should travel in a certain direction, given its mass. The truth would be rather different, though: if the person were reckless enough to kick a dog, it might turn around and bite them.

It is far from certain that the dog’s final resting place would have anything to do with Newton’s Laws of Motion. Human beings are complicated: many things happen instantaneously. One cannot predict exactly what will occur, as one individual's response influences another's communication. The relationship is circular; individuals continuously respond to feedback to determine their subsequent actions. Focusing solely on one aspect of the interaction is like attempting to understand the rules of football by only observing one team.

Lastly, scientific laws are generalisable, but psychological explanations are often restricted to specific times and places, because psychology studies people and, indirectly, the effects of social and cultural changes on behaviour. Psychology does not occur in a social vacuum. Behaviour changes over time and in different situations. These factors, together with individual differences, mean research findings are reliable for a limited time only.

SYSTEMATIC THEORY CONSTRUCTION

The primary function of theories is to enhance our comprehension and predictive abilities concerning the world. In essence, a theory represents a compilation of principles that elucidate the observations and gathered facts in psychology. Theory development is integral to both models: the deductive model involves theory construction at the outset, while the inductive model constructs the theory towards the conclusion.

It is crucial to recognise that the formulation of a theory cannot occur arbitrarily. Rather, it necessitates a foundation rooted in observed and documented facts. These empirical facts serve as the building blocks for constructing a theory. Moreover, it is worth noting that the scientific method, in its strictest sense, is characterised by systematic procedures that prioritize meticulously planned studies over random or unsystematic observations.

Let’s look at an example to illustrate this:

“While chatting with the children, a teacher is concerned that most come to school without eating a healthy breakfast. In her opinion, children who eat a decent breakfast learn to read more quickly and are better behaved than children who do not; she now wants to set up a preschool breakfast club for the children so that they can all have this beneficial start to the day. The local authority is unwilling to spend money on this project purely based on the teacher’s opinion and insists on having scientific evidence for the claimed benefits of eating a healthy breakfast.”

The situation above does not exemplify systematic theory construction because it begins with an anecdotal observation and a personal belief rather than a structured gathering of empirical evidence. The teacher noticed a trend and formed a hypothesis based on casual conversations and personal judgments about the children's behaviour and learning progress in relation to their breakfast habits. This is a preliminary observation that may lead to the formation of a hypothesis, but it does not constitute a theory in the scientific sense.

For the teacher's belief to evolve into a systematic theory, it would require a more rigorous process:

  1. Observation: Gathering objective data on breakfast habits and children's learning and behaviour, not just informal chats.

  2. Hypothesis Formation: Formulating a clear, testable hypothesis based on these observations, such as "Eating a healthy breakfast improves children's reading abilities and behaviour".

  3. Testing: Designing a controlled experiment or study to test this hypothesis, potentially by comparing children who eat a healthy breakfast with a control group who do not.

  4. Data Collection and Analysis: Collecting and analysing data to see whether there is a statistically significant difference in learning and behaviour between the two groups (see the sketch after this list).

  5. Conclusion: Drawing a conclusion from the data analysis that supports or refutes the hypothesis.

  6. Theory Construction: If various studies consistently support the hypothesis and the underlying mechanisms are understood, a theory can be constructed.
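
A minimal sketch of steps 3 to 5, assuming a simple independent-groups design; the reading scores below are invented purely for illustration, and a real study would need proper sampling, ethical approval and a validated reading measure:

```python
# A minimal sketch of steps 3-5: comparing reading scores for children who do
# and do not eat a healthy breakfast. All numbers are invented for illustration.
from scipy import stats

breakfast_group = [14, 17, 15, 19, 16, 18, 15, 17, 20, 16]      # reading-test scores
no_breakfast_group = [12, 15, 13, 14, 16, 12, 13, 15, 14, 13]

t_stat, p_value = stats.ttest_ind(breakfast_group, no_breakfast_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Step 5: conclude by comparing p to the conventional 0.05 significance level.
if p_value < 0.05:
    print("Reject the null hypothesis: the difference is unlikely to be due to chance.")
else:
    print("Fail to reject the null hypothesis.")
```

Only if such results were replicated across many studies, and the underlying mechanism understood, would step 6 (theory construction) be justified.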

Science can begin from random observation, which is the beginning of inductive reasoning. Isaac Asimov said that the most exciting phrase to hear in science is not "Eureka!" but "That's funny." After the scientist notices something funny, he or she investigates it systematically. In the case of the teacher's scenario, the local authority is correct in asking for scientific evidence. The teacher's observations, while valuable, are subjective and do not meet the rigorous standards of systematic theory construction. They are the starting point for scientific inquiry, not the end. To move from personal opinion to scientific theory, the teacher's hypothesis about the benefits of a healthy breakfast would need to be tested through carefully planned and executed research that collects and analyzes objective data.

OBJECTIVITY

The essence of scientific inquiry lies in objectively examining matter and factual information. In this pursuit, researchers are committed to maintaining a state of impartiality, refraining from imposing personal beliefs or biases that might skew their investigations. Researchers must remain unaffected by personal emotions and experiences, striving to minimize all potential sources of bias and eliminate subjective ideas. The bedrock of scientific inquiry rests upon the reliance on factual data about the world as it truly exists, regardless of whether these facts align with the investigator's initial hopes or desires. To varying degrees of success, scientists endeavour to eliminate their own biases when conducting observations.

THE COUNTER-ARGUMENT TO OBJECTIVITY

However, it is important to acknowledge the counterargument that absolute objectivity can be elusive, especially in psychology, where humans study other humans. The challenge arises from the inherent difficulty of studying human behaviour without introducing biases. Additionally, from a broader philosophy of science perspective, maintaining objectivity can be challenging because one's theoretical standpoint may influence the interpretation of data. For example, if a researcher believes that gender is primarily influenced by nature, they may struggle to remain impartial when confronted with data that contradicts this viewpoint. Furthermore, the shared species identity of the observer and the observed can introduce complexities related to reflexivity.

REPLICATION SHOULD BE POSSIBLE:

This refers to whether a particular method and finding can be repeated with different/same people and on different occasions to see if the results are similar. If a discovery is reported but cannot be replicated by other scientists, it will not be accepted, e.g., if a type 1 error has occurred. If we get the same results repeatedly under the same conditions, we can be sure of their accuracy beyond a reasonable doubt. This gives us confidence that the results are reliable and can be used to build up a body of knowledge or a theory, which is vital in establishing a scientific theory.

FALSIFIABILITY, DEDUCTIVE REASONING AND THE NULL HYPOTHESIS

NULL AND ALTERNATIVE HYPOTHESES

There are two types of hypotheses: The Alternative (sometimes called the Experimental Hypothesis) and the Null Hypothesis.

FOR EXAMPLE:

Alternative Hypothesis (H1 or HA): "The consumption of cheese before bedtime is associated with an increase in the frequency and intensity of nightmares in individuals."

In this alternative hypothesis, it is suggested that there is a specific relationship between eating cheese before bedtime and experiencing more frequent and intense nightmares. It proposes a cause-and-effect connection between the two variables.

Null Hypothesis (H0): "There is no significant relationship between the consumption of cheese before bedtime and the frequency or intensity of nightmares in individuals."

On the other hand, the null hypothesis suggests no meaningful connection exists between eating cheese before bedtime and the occurrence or intensity of nightmares. It essentially states that any observed differences in nightmares are due to chance and unrelated to cheese consumption.
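
As a minimal sketch of how such a hypothesis might be tested (the counts below are invented purely for illustration), a chi-square test of independence on a 2x2 table asks whether reporting nightmares is related to cheese consumption, with the test framed around the null hypothesis of no relationship:

```python
# A minimal sketch: testing the cheese-and-nightmares null hypothesis with a
# chi-square test of independence. The counts are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: ate cheese before bed / no cheese; columns: nightmares / no nightmares.
observed = [[22, 28],
            [18, 32]]

chi2, p_value, dof, expected = chi2_contingency(observed)

if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject the null hypothesis (H0).")
else:
    print(f"p = {p_value:.3f}: fail to reject the null hypothesis (H0).")
```

Note that the decision language centres on the null hypothesis: it is rejected or not rejected, rather than the alternative hypothesis being "proved".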

POPPER'S INFLUENCE ON NULL AND ALTERNATIVE HYPOTHESES

Contrary to popular belief, in scientific research the protocol is to reject (or fail to reject) the null hypothesis, not to confirm the alternative hypothesis. Before Popper, the null hypothesis, as it is now commonly understood, did not have a defined place in scientific methodology.

The null hypothesis (H0) is the foundational element in scientific experimentation. It represents the default assumption that there is no effect or difference in what is being studied, and it is formulated in a way that can potentially be refuted. When deciding whether your research has worked, the scientific language is to accept or reject the null hypothesis, not to accept or reject the alternative hypothesis. This seemingly inconsequential rule demonstrates that the research in question is truly scientific: it is capable of having a null hypothesis, i.e., of being refuted, unlike pseudoscientific theories such as Freud's, which are incapable of being falsified. For instance, it cannot be refuted that a person has unconscious biases.

Another example is the hypothesis that baked beans cause depression, which is easy to disprove or prove. However, if something is theoretically impossible to disprove, such as the existence of ghosts, it may not be possible to formulate a null hypothesis, rendering the research unscientific.

Alternative Hypothesis (H1): Named to emphasise its lesser role and position as an alternative to the null. The alternative hypothesis, proposed by researchers, suggests an effect or a difference in the phenomenon under investigation. It is also formulated for testing against the null hypothesis.

INDUCTIVE VERSUS DEDUCTIVE REASONING

This seeking of confirmation reflects the traditional understanding of the scientific method as inductive reasoning, which goes back to the ancient Greeks. It relies on the belief that to look at the world with a scientific eye is to observe with no preconceived notions; you look, see what you see, and then develop hypotheses based on those observations.

So, you look at a swan and notice it’s white. You look at another swan; it’s white too. You look at enough white swans, and eventually, you hypothesise that all swans are white. However, according to Popper, all inductive evidence is limited: you cannot observe the universe at all times and in all places, so you do not know that there are no black swans. Yet observing a single black swan would be sufficient to refute the conclusion that all swans are white.

 The fundamental difference between inductive and deductive reasoning is that inductive reasoning aims to develop a theory, while deductive reasoning aims to test an existing theory. Inductive reasoning moves from specific observations to broad generalisations, and deductive reasoning vice versa. Both approaches are used in various types of research, and it’s not uncommon to combine them in one large study.

KARL POPPER AND DEDUCTIVE REASONING

Karl Popper, a prominent philosopher of science known for introducing the concept of falsification and advocating the scientific method known as deductive reasoning, played a pivotal role in reshaping the landscape of scientific inquiry. Before Popper's contributions, the field of science was largely dominated by the paradigm of inductive reasoning. However, Popper identified limitations in relying solely on inductive reasoning and sought to address these issues.

As mentioned earlier, inductive reasoning, characterised by its open-ended and exploratory nature, involves observing phenomena that scientists deem significant. Science often originates from a seemingly arbitrary observation, marking the commencement of inductive reasoning. The most captivating expression in science is not "aha!" but rather "That's weird." Once a scientist perceives something intriguing, a systematic investigation ensues and leads to the formulation of theories based on these observations. For example, a scientist observing the motion of celestial bodies and subsequently proposing the theory of planetary orbits is an illustration of inductive reasoning in action.

WHY IS FALSIFICATION SO IMPORTANT?

Establishing the "existence" of something is frequently a straightforward task, as illustrated by the ease of providing evidence for the existence of God. This evidence may be sourced from religious texts, places of worship, personal anecdotes, and spiritual experiences. A similar straightforwardness can be observed in proving the existence of Father Christmas, particularly during the holiday season in December, when sightings of him in department stores and reports of missing mince pies are commonplace. Again, this is how most people commonly regard hypothesis testing, e.g., testing to see if something is true, as in testing the alternative hypothesis.

On the other hand, disproving the existence of God, or refuting the claim that a person has unconscious biases, is impossible.

Popper emphasised that falsifiability, refutability, or testability were essential criteria for a theory's scientific status. He identified the disparity between falsifiable scientific methods and those centred on confirming pre-existing beliefs. To illustrate this point, Popper explained how psychodynamic theorists could give the same scenario completely different interpretations. For example, Freud attributed a man pushing a child into the water to psychological repression, possibly stemming from an Oedipus complex, while a man sacrificing his life to save a child was seen as achieving sublimation. Adler, another psychodynamic theorist, explained both behaviours as driven by feelings of inferiority. Popper noted that psychodynamic theories could explain virtually any behaviour with retrospective analysis, rendering them scientifically weak and, in his view, pseudo-scientific.

Popper firmly believed that methods that solely confirmed pre-existing beliefs had the potential to substantiate almost any claim.

Karl Popper's examination of Albert Einstein's work underscored a fundamental distinction in scientific methodology. Unlike theories susceptible to retrospective data fitting, exemplified by Sigmund Freud's approach of interpreting past events so that the theory was always "confirmed", Einstein's gravitational theory made risky, forward-looking predictions that could be decisively refuted if observations did not match them. Its capacity to withstand rigorous testing through falsifiability exemplified the essence of sound scientific inquiry.

Deductive reasoning commences with the premise that a particular phenomenon requires investigation to establish causal relationships. It takes a more focused and narrow approach, primarily concerned with testing or confirming hypotheses. Popper emphasised the importance of incorporating alternative hypotheses into research planning, with the primary aim of challenging and disproving the null hypothesis.

While some scientific studies may initially appear purely deductive, such as experiments designed to test the hypothesised effects of a treatment on an outcome, Popper recognised that most social research encompasses both inductive and deductive reasoning processes.

INDUCTIVE RESEARCH APPROACH

When there is little to no existing literature on a topic, it is common to perform inductive research because there is no theory to test. The inductive approach consists of three stages:

OBSERVATION

  • A budget ferry service has terrible service

  • Students X and Y have head lice

  • Dolphins depend on oxygen to survive.

OBSERVE A PATTERN

  • Another 50 budget ferry services have terrible service

  • All observed students have head lice.

  • All observed marine mammals depend on oxygen to survive

DEVELOP A THEORY

  • Budget ferry companies always have terrible service

  • All students have head lice

  • All marine mammals depend on oxygen to survive

LIMITATIONS OF THE INDUCTIVE APPROACH

 A conclusion drawn based on an inductive method can never be proven, but it can be invalidated.

EXAMPLE:
You observe 2000 budget ferry services, all of which have terrible service; this aligns with your theory. However, you can never prove that ferry number 2001 will also have terrible service; still, the larger the dataset, the more reliable the conclusion.

DEDUCTIVE RESEARCH APPROACH

When conducting deductive research, you always start with a theory (the result of inductive research). Reasoning deductively means testing these theories. If there is no theory yet, you cannot conduct deductive research. The deductive research approach consists of four stages:

STEP 1: START WITH AN EXISTING THEORY

  • Budget ferry service always has terrible service

  • All students have head lice

  • All marine mammals depend on oxygen to survive

STEP 2: FORMULATE A HYPOTHESIS BASED ON AN EXISTING INDUCTIVE THEORY

  • If passengers travel with a budget ferry service, they will always experience terrible service

  • All students in high school have head lice

  • All marine mammals depend on oxygen to survive

STEP 3: COLLECT DATA TO TEST THE HYPOTHESIS

  • Collect data from passengers who travel with budget ferry companies

  • Test all students in senior school for head lice

  • Study all marine mammals to see if they depend on oxygen.

STEP 4: ANALYSE THE DATA: DOES THE DATA REJECT OR SUPPORT THE HYPOTHESIS?

  • 175 out of 200 Budget ferry services do not have terrible service = reject the hypothesis

  • 984 out of 1000 students didn’t have head lice = reject the hypothesis

  • All marine mammals depend on oxygen to survive = support the hypothesis

 LIMITATIONS OF THE DEDUCTIVE APPROACH

The conclusions of deductive reasoning can only be true if all the premises set in the inductive study are true and the terms are clear.

EXAMPLE:

  • All students have head lice (premise)

  • Betty is a student (premise)

  • Betty has head lice (conclusion)

Based on the premises we have, the conclusion must be true. However, if the first premise turns out to be false, the conclusion that Betty has head lice cannot be relied upon.

SCIENCE HAS TO HAVE A SPECIFIC PARADIGM – A SPECIFIC IDEA OR A BELIEF SYSTEM:

In major sciences, paradigms are fundamental frameworks encapsulating commonly accepted views about a subject. Essentially, a scientific paradigm functions as a collective belief system, guiding the direction of research and shaping treatment approaches.

Psychology does not have these structures because it is an umbrella term for many different belief systems about behaviour. For example, biological, cognitive, behavioural, psychodynamic, and humanist approaches exist. For these reasons, it is difficult to speak of psychologists as a single, unified group, because psychologists rarely agree. Even summarising psychology as the study of the human brain or psyche proves challenging because not all psychologists consider the brain or psyche essential for comprehending the complexities of human functioning.

Humanists and behaviourists don’t rate the brain at all, and only psychodynamic theorists believe there’s a psyche. Conversely, to neuroscientists and evolutionary psychologists, everything psychological is ultimately biological, with the brain taking the lead role. Indeed, neuroscientists believe that all behaviour is caused by biological processes such as genes and hormones. Consequently, their approach to psychological issues involves exclusively employing biological interventions like drugs and surgery. For example, they would not endorse psychoanalysis as a treatment for depression, as they dismiss the idea of unconscious trauma. This divergence underscores the intricate and multifaceted nature of the field of psychology.

PARADIGM SHIFTS

A paradigm shift, a concept identified by Thomas Kuhn, refers to a significant and transformative change in the fundamental concepts and experimental approaches within a particular scientific discipline. It involves a departure from established norms and adopting new perspectives or methodologies that reshape the understanding and direction of scientific inquiry in that field. Thomas Kuhn's work, particularly in "The Structure of Scientific Revolutions," has been influential in describing how scientific revolutions occur through these paradigm shifts.

Advancements in scientific tools contribute to paradigm shifts by providing new observation, measurement, and analysis methods. In the case of Freud and humanism, technologies and research methodologies have evolved, allowing psychologists to scrutinize and test hypotheses more effectively. As a result, if empirical evidence contradicts or challenges the foundational principles of a particular psychological paradigm, it can lead to a reassessment and a potential shift in the prevailing scientific viewpoint.

One notable example of a paradigm shift is the transition from classical Newtonian physics to Albert Einstein's theory of relativity in the early 20th century. Newtonian physics, which dominated scientific thought for centuries, provided a deterministic and absolute framework for understanding the motion of objects in space and time.

Einstein's theory of relativity, introduced in the early 1900s, represented a significant departure from Newtonian physics. The key ideas, particularly those in the special theory of relativity (1905) and the general theory of relativity (1915), challenged fundamental assumptions about space, time, and gravity.

This paradigm shift had profound implications for our understanding of the physical universe, challenging long-held intuitions and leading to a more accurate description of phenomena.

In 1990, the American physicist and philosopher Thomas Kuhn suggested that psychology could only be considered a pre-science due to the prevailing lack of consensus within the field. By this, he implied that psychology had not yet achieved the status of a fully developed science, leaving the possibility that it might attain that status in the future.

In the past, various psychological approaches were deeply entrenched in their foundational beliefs, resistant to any deviations from established rules. For instance, behaviourism once vehemently rejected the relevance of the brain, while neuroscience understated the importance of environmental factors. However, advancements in brain plasticity have demonstrated that environmental influences, particularly in critical periods, significantly impact brain development. The intricacies of human beings emerge from a combination of factors, encompassing the environment, parenting, and genetics.

Regrettably, the reluctance of psychological approaches to embrace alternative perspectives impeded the evolution of psychology into a fully-fledged science. A more constructive approach would have entailed integrating diverse theories, underpinned by research, into a comprehensive psychology paradigm.

REPLICABILITY

Replication is crucial in psychology and science for several reasons:

  1. Verification of Findings: Replication allows researchers to verify the validity and reliability of initial findings. If independent researchers can consistently reproduce a study's results, it lends more credibility to the original findings.

  2. Generalizability: Replication helps establish the generalizability of research findings. If the same results can be obtained across different settings, populations, or under varied conditions, it suggests that the findings have broader applicability and are not limited to specific circumstances.

  3. Error Detection: Replication aids in detecting errors or biases in the original study. If a study's findings are not replicable, it raises questions about the methodology, statistical analysis, or other potential sources of error in the initial research.

  4. Cumulative Knowledge: Replication contributes to the accumulation of scientific knowledge. Successful replications build a robust body of evidence, strengthening the scientific understanding of a phenomenon.

  5. Identification of Boundary Conditions: Replication helps identify the boundary conditions of a theory or hypothesis. Researchers can explore whether certain factors influence the generalizability of findings and under what conditions they may or may not hold.

  6. Scientific Progress: Science is an iterative process; replication is integral to scientific progress. It ensures that theories and findings stand up to scrutiny over time, refining our understanding and leading to the development of more accurate and comprehensive models.

  7. Preventing Fraud: Replication acts as a safeguard against scientific fraud. If a study's results are fabricated or manipulated, it becomes more challenging for fraudulent findings to withstand independent verification through replication.

  8. Building Consensus: Scientific consensus is built upon replicated evidence. When multiple studies independently replicate similar findings, it strengthens the scientific community's confidence in the reliability of those findings.

Overall, replication is a cornerstone of the scientific method, contributing to scientific knowledge's reliability, validity, and robustness.

SCIENCE MUST BE NOMOTHETIC

Scientific endeavours should strive to be nomothetic, meaning they seek to establish general laws or principles that can be universally applied. In the context of science, nomothetic approaches aim to discover and formulate broad patterns, regularities, or laws that hold across different situations and cases. This contrasts with idiographic approaches, which focus on understanding and describing individual cases' unique and specific aspects.

The call for nomothetic science suggests a commitment to uncovering generalizable truths and principles that can contribute to a more comprehensive understanding of natural phenomena. This approach is common in many scientific disciplines, particularly those that aim to formulate theories and laws with broad applicability.

COUNTER ARGUMENT: IDIOGRAPHIC METHODS

The counterargument, advanced by Allport, is that psychology should be concerned with studying unique individuals and should therefore be idiographic: “A person’s subjective experience of the world is an important and influential factor on their behaviour. Only by seeing the world from the individual’s point of view can we understand why they act the way they do.” Moreover, it is very rare in psychology that a law is found in terms of a cause-and-effect relationship. For example, a stomach ulcer might be said to be caused by psychological stress; however, other factors such as excess stomach acid, a bad diet or a physiological predisposition may cause it too. It has also been argued that even in the natural sciences a purely nomothetic approach is impossible, making the distinction between natural and human science a false dichotomy.

WE NEED BOTH METHODS

Nomothetic and idiographic approaches are both relevant in various contexts.

For example, in the medical emergency scenario of a suspected stroke, rapid recognition and treatment of stroke signs, such as asymmetrical facial features, slurred speech, and limb weakness, are paramount for ensuring timely and effective medical intervention.

If patients were treated idiographically in the acute phase of a stroke, with treatment tailored as if their symptoms were unique to them, the resulting delays in providing appropriate, evidence-based treatments, such as administering clot-busting drugs or performing other interventions, would likely lead to extensive brain damage and a worsened prognosis.

The nomothetic approach would be more applicable here. Grounded in standardised protocols, it addresses a condition whose physiological manifestations exhibit consistent patterns across people, allowing for quick and potentially life-saving interventions. This approach is suitable when certain conditions display uniform and identifiable characteristics across individuals.

In contrast, when dealing with psychiatric issues, a strictly nomothetic approach, such as a one-size-fits-all magic pill, may not be suitable. Psychiatric illnesses often have diverse origins and treatment paths unique to individuals. An idiographic approach is more relevant here, as it considers the individual patient's specific circumstances, history, and symptoms. Tailoring treatments based on a nuanced understanding of the patient's unique factors is essential for effective psychiatric care.

In summary, the nomothetic approach is valuable in quickly identifying and treating certain medical emergencies with standardized protocols, as seen in stroke cases. However, a more idiographic approach is necessary in fields like psychiatry, where individualised understanding and treatment plans account for the diverse and nuanced nature of mental health conditions.

QUANTITATIVE VERSUS QUALITATIVE DATA

Quantitative data is considered a key feature of science for several reasons:

  1. Precision and Measurability: Quantitative data involves numerical measurements, providing precision for accurate and consistent comparisons. This precision is essential for scientific inquiry, enabling researchers to quantify and measure phenomena with a high degree of accuracy.

  2. Objectivity: Quantitative data is often perceived as more objective than qualitative data. Numerical values are less prone to interpretation and subjectivity, reducing the potential for bias in the analysis and interpretation of results.

  3. Statistical Analysis: Quantitative data lends itself to statistical analysis, allowing researchers to identify patterns, trends, and relationships within the data. Statistical methods provide a rigorous framework for drawing conclusions and inferences about populations based on sample data.

  4. Generalizability: Quantitative research often aims for generalizability. Using numerical data allows researchers to make predictions and generalize findings to broader populations. This is crucial for developing theories and principles that can be applied beyond the specific context of a single study.

  5. Replicability: Quantitative research is highly replicable. Using standardized measures and statistical procedures facilitates the replication of studies by other researchers, promoting the validation and verification of results.

  6. Objectivity in Observation: Quantitative research often relies on controlled and structured observations, contributing to objectivity. Researchers can design experiments with clear procedures, reducing the influence of personal bias in data collection.

  7. Quantifiable Variables: Many scientific phenomena involve variables that can be quantified, such as temperature, time, weight, or concentration. Quantitative data allows researchers to measure and analyze changes in these variables systematically.

  8. Hypothesis Testing: Quantitative research is well-suited for hypothesis testing. Researchers can formulate hypotheses that can be tested using statistical methods, allowing for rejecting or accepting hypotheses based on empirical evidence.

COUNTER ARGUMENT: QUALITATIVE DATA

The counterargument highlights concerns about the potential bias introduced by quantitative data, particularly when respondents are constrained to pre-set answers that may not capture the full complexity of their experiences or perspectives. While quantitative data plays a crucial role in scientific investigations, the choice between quantitative and qualitative methods should be guided by the research question and the nature of the studied phenomena.

Quantitative Data Bias:

  1. Limited Response Options: The argument suggests that pre-set response options in quantitative surveys or experiments may restrict participants' ability to express nuanced or diverse viewpoints. This limitation could result in oversimplified or inaccurate representations of individuals' experiences.

  2. Subjective Nature of Measurement: Despite their precision, quantitative measurements may not fully capture subjective aspects of phenomena. Variables quantified numerically might not encompass the richness of qualitative attributes that participants could convey through open-ended responses.

Role of Qualitative Data:

  1. Understanding Complexity: Qualitative data is praised for capturing the richness and complexity of human experiences. It allows researchers to explore in-depth the perspectives, motivations, and contexts that quantitative measures might overlook.

  2. Exploratory and Descriptive Research: Qualitative methods are particularly well-suited for exploratory and descriptive research, where the goal is to generate insights and understand the underlying dynamics of a phenomenon. This approach is valuable when the research question requires a deeper understanding rather than numerical measurement.

  3. Flexibility in Data Collection: Qualitative research often involves flexible data collection methods such as interviews, focus groups, or observations, enabling researchers to adapt their approach based on emerging themes or unexpected findings.

In conclusion, while quantitative data is indispensable for certain scientific inquiries, researchers should consider the limitations and potential biases associated with pre-set responses. Integrating quantitative and qualitative methods, known as mixed-methods research, can offer a more comprehensive and nuanced understanding of complex phenomena. The choice between methods should align with the research goals and the depth of insight required for a study.

VARIABLES MUST BE OPERATIONALISED

You can’t just say you are measuring memory, for instance, as that is too broad and vague a concept; you would have to narrow it down to an aspect of memory, such as being able to recall digit/letter arrays. Otherwise, your variables are vague, invalid and open to subjective interpretation.
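
As a minimal sketch of what operationalisation looks like in practice (the task and scoring rule here are invented purely for illustration), 'memory' is narrowed down to a single measurable score: the number of digits recalled in the correct serial position on a digit-span task:

```python
# A minimal sketch of operationalising "memory" as a digit-span score:
# the broad construct is reduced to one measurable number per participant.
def digit_span_score(presented: list[int], recalled: list[int]) -> int:
    """Count the digits recalled in the correct serial position."""
    return sum(1 for p, r in zip(presented, recalled) if p == r)

# Invented example trial
presented = [7, 2, 9, 4, 1, 8]
recalled = [7, 2, 9, 1, 4, 8]
print(digit_span_score(presented, recalled))  # -> 4 digits in the correct position
```

The operationalised variable is precise and replicable, but, as the counter-argument below notes, it captures only a narrow slice of what 'memory' means.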

COUNTER ARGUMENT TO VARIABLES MUST BE OPERATIONALISED

Some psychologists argue that when an IV is operationalised it becomes too specific to tell us much about human behaviour: what does being able to recall digit/letter arrays really tell us about human memory? In the 'hard' sciences, variables are easily operationalised; researchers use volts, amps and grams, which are easily described, understood and quantified. Psychologists, by contrast, often investigate areas they cannot easily quantify. For example, if they look at stress, they may find indirect variables, such as sweat, but they cannot firmly say that this is a direct measure of the stressful situation; it could be that the person is just sweaty. Moreover, psychologists face problems when measuring other internal emotional states like anger, jealousy, and love. How can you operationalise such terms? They are so subjective and abstract. Nevertheless, psychologists try to come up with some inventive measurements in the form of attitude scales, personality tests, intelligence tests, memory tests, etc. Critics would argue that these are not direct measurements and are often subject to cultural bias, a lack of mundane realism, and problems of ecological and population validity; in short, external validity issues. Furthermore, there are internal validity issues, as these kinds of tests often don’t measure what they are supposed to: does IQ measure intelligence, or the ability to perform well at logic and abstract reasoning? What about demand characteristics?