Psychology of self

Self-Worth First

The dictionary defines self-worth as "the sense of one's own value or worth as a person." However, there are many ways for a person to value themselves and assess their worth as a human being, and some of these are more psychologically beneficial than others. In this article, we discuss the value of true self-worth, how to build this type of self-worth, and why so many of us lack a feeling of worthiness.

The psychology of self is the study of the cognitive, conative, or affective representation of one's identity, or of the subject of experience. The earliest formulation of the self in modern psychology derived from the distinction between the self as I, the subjective knower, and the self as Me, the object that is known.[1]

Current views of the self in psychology position the self as playing an integral part in human motivation, cognition, affect, and social identity.[2] It may now be possible to ground the experience of self in a neural process with cognitive consequences, which would give us insight into the elements of which the complex, multiply situated selves of modern identity are composed.

The self has many facets that help make up integral parts of it, such as self-awareness, self-esteem, self-knowledge, and self-perception. All parts of the self enable people to alter, change, add, and modify aspects of themselves in order to gain social acceptance in society. "Probably, the best account of the origins of selfhood is that the self comes into being at the interface between the inner biological processes of the human body and the sociocultural network to which the person belongs."[3]

Being Human

What Is Being Human?

Being human can be defined as displaying characteristics that are unique to human beings. This can be analyzed from a philosophical perspective as well as through an introspective analysis of oneself. The meaning of being human can vary from one person to another.

For example, one might think that being human means showing humane qualities like kindness, empathy, and generosity. Another might think that being human is the ability to choose between right and wrong, while yet another thinks that being human means displaying all natural human characteristics like anger, pity, jealousy, and love.

The meaning of being human also differs according to factors like religion, nationality, and family background.

My intuition tells me that throughout the ages, we as humans have turned what is simply "being human" into such a complicated, chaotic undertaking that no one can clearly say, "This is what a human is."

        This is what we gotta do to be "Human"

        This is what being "Human" really is; here are the things to expect on this "Journey" called being human

Human Being vs Being Human

Opening to the Shadow Self - a post on Dr. Caldwell's site that I can personally relate to well

Understanding / being aware of our biases

My question: "How well can we really see ourselves?"


Human subjects in psychology and sociology

Stanford prison experiment

A study conducted by Philip Zimbardo in 1971 examined the effect of social roles on college students at Stanford University. Twenty-four male students were randomly assigned the role of prisoner or guard in a mock prison set up in one of Stanford's basements. After only six days, the abusive behavior of the guards and the psychological suffering of the prisoners proved severe enough to halt the planned two-week experiment.[12] Human subjects were central to this study, which was designed to test whether conflict between prisoners and guards is inevitable, and if so, whether it stems from the sadistic dispositions of guards (the dispositional hypothesis) or from the hostile environment of the prison itself (the situational hypothesis): prisoners might lack respect for the law, and guards might behave in a hostile manner because of the power structure of the social environment within prisons. If the participants behaved in a non-aggressive way, this would support the dispositional hypothesis; if they behaved the way people do in real prisons, this would support the situational hypothesis. Human subjects were essential to this experiment because the results depend on distinctly human reactions and behaviors. The results showed that people readily conform to the specific social roles they are expected to play. The prison environment played a part in making the guards' behavior more brutal, since none of the participants had shown this type of behavior beforehand, and most of the guards later had a hard time believing they had acted in such ways. This evidence points to situational behavior, meaning the behavior was produced by the hostile environment of the prison.[13]

Milgram experiment

In 1961, Yale University psychologist Stanley Milgram led a series of experiments to determine to what extent an individual would obey instructions given by an experimenter. Placed in a room with the experimenter, subjects played the role of a "teacher" to a "learner" situated in a separate room. The subjects were instructed to administer an electric shock to the learner whenever the learner answered a question incorrectly, with the intensity of the shock increased for every incorrect answer. The learner was a confederate (i.e., an actor), and the shocks were faked, but the subjects were led to believe otherwise. Both prerecorded sounds of electric shocks and the confederate's pleas for the punishment to stop were audible to the "teacher" throughout the experiment. When the subject raised questions or paused, the experimenter insisted that the experiment should continue. Despite widespread speculation that most participants would not continue to "shock" the learner, 65 percent of participants in Milgram's initial trial complied until the end of the experiment, administering shocks of purported intensities up to "450 volts".[14][15] Although many participants questioned the experimenter and displayed various signs of discomfort, the same 65 percent of subjects were willing to obey instructions through the final shock when the experiment was repeated.[16]

Asch conformity experiments

Psychologist Solomon Asch's classic conformity experiment in 1951 involved one subject and multiple confederates, who were asked to answer a variety of low-difficulty questions.[17] In every scenario, the confederates gave their answers in turn, and the subject was allowed to answer last. In a control group of participants, the error rate was less than one percent. However, when the confederates unanimously chose an incorrect answer, 75 percent of the subjects agreed with the majority at least once. The study has been regarded as significant evidence for the power of social influence and conformity.[18]

Robber's Cave study

A classic demonstration of realistic conflict theory, Muzafer Sherif's Robbers Cave experiment shed light on how group competition can foster hostility and prejudice.[19] In the 1961 study, two groups of ten boys each, who were not "naturally" hostile, were brought to Robbers Cave State Park, Oklahoma, without knowledge of one another.[20] The twelve-year-old boys bonded with their own groups for a week before the groups were set in competition with each other in games such as tug-of-war and football. In light of this competition, the groups resorted to name-calling and other displays of resentment, such as burning the other group's team flag. The hostility continued and worsened until the end of the three-week study, when the groups were forced to work together to solve problems.[20]

Bystander effect

The bystander effect was demonstrated in a series of famous experiments by Bibb Latané and John Darley.[20] In each of these experiments, participants were confronted with a type of emergency, such as witnessing a seizure or smoke entering through air vents. A common phenomenon was observed: as the number of witnesses or "bystanders" increases, so does the time it takes for individuals to respond to the emergency. This effect has been attributed to the diffusion of responsibility: when surrounded by others, the individual expects someone else to take action.[20]

Cognitive dissonance - "I find it interesting this is one of the topics here" - me

Human subjects have been commonly used in experiments testing the theory of cognitive dissonance since the landmark study by Leon Festinger and Merrill Carlsmith.[21] In 1959, Festinger and Carlsmith devised a situation in which participants would undergo excessively tedious and monotonous tasks. After completing these tasks, the subjects were instructed to help the experiment continue in exchange for a variable amount of money. All the subjects had to do was inform the next "student" waiting outside the testing area (who was secretly a confederate) that the tasks involved in the experiment were interesting and enjoyable. It was expected that the participants wouldn't fully agree with the information they were imparting, and after complying, half of the participants were awarded $1 and the others $20. A subsequent survey showed that, by a large margin, those who received less money for essentially "lying" to the student came to believe that the tasks were far more enjoyable than did their highly paid counterparts.[21]

Vehicle safety

Over the years, many studies on human subjects have served a greater purpose. Human subject research is used across many industries, including the automotive industry. Civilian volunteers have chosen to participate in vehicle safety research to help automobile designers create more effective and durable safety restraints for vehicles. This research gives designers more data on the tolerance of the human body in an automobile accident, which is used to improve safety features. The tests conducted have ranged from sled runs evaluating head-neck injuries to airbag tests and even tests involving military vehicles and their restraint systems. Notably, across thousands of tests involving human subjects, results indicate that no serious injuries occurred. This record is largely due to the researchers' preparation in following all ethical guidelines and ensuring the safety and well-being of their subjects. Although this research provides positive contributions, there is some resistance to human subject research for crash testing because of the liability for injury and the scarcity of facilities with the appropriate machinery for such experiments. Overall, the experiments have contributed to knowledge of human injury tolerance in crash impacts, yielding data that testing with cadavers or crash test dummies alone could not provide. Cadavers and crash test dummies remain valuable for higher-tolerance tests beyond human capability.[22]

Definition of Human Subjects Research

Human subjects research is any research or clinical investigation that involves human subjects.

Investigators conducting human subjects research must satisfy DHHS regulations [45 CFR Part 46] and FDA regulations [21 CFR Part 50 and 56] regarding the protection of human subjects, as applicable. When considering whether an activity meets the definition of human subjects research under the DHHS regulations, one must consider two federal definitions: research and human subject.

Research is defined as a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge.

A "systematic investigation" is an activity that involves a prospective plan that incorporates data collection, either quantitative or qualitative, and data analysis to answer a question.

Examples of systematic investigations include:

  • surveys and questionnaires
  • interviews and focus groups
  • analyses of existing data or biological specimens
  • epidemiological studies
  • evaluations of social or educational programs
  • cognitive and perceptual experiments
  • medical chart review studies

The five factors of the Big Five personality model are:

  • Openness to experience (inventive/curious vs. consistent/cautious). Appreciation for art, emotion, adventure, unusual ideas, curiosity, and variety of experience. Openness reflects the degree of intellectual curiosity, creativity, and preference for novelty and variety a person has. It is also described as the extent to which a person is imaginative or independent, and it depicts a personal preference for a variety of activities over a strict routine. High openness can be perceived as unpredictability or lack of focus and is associated with a greater likelihood of engaging in risky behaviour or drug taking.[4] Individuals with high openness also tend to lean toward being artists or writers and to appreciate the significance of intellectual and artistic pursuits.[5] Moreover, individuals with high openness are said to pursue self-actualization specifically by seeking out intense, euphoric experiences. Conversely, those with low openness seek fulfillment through perseverance and are characterized as pragmatic and data-driven, sometimes even perceived as dogmatic and closed-minded. Some disagreement remains about how to interpret and contextualize the openness factor.
  • Conscientiousness (efficient/organized vs. easy-going/careless). A tendency to be organized and dependable, show self-discipline, act dutifully, aim for achievement, and prefer planned rather than spontaneous behavior. High conscientiousness is often perceived as stubbornness and obsession. Low conscientiousness is associated with flexibility and spontaneity, but can also appear as sloppiness and lack of reliability.[6]
  • Extraversion (outgoing/energetic vs. solitary/reserved). Energy, positive emotions, surgency, assertiveness, sociability, talkativeness, and the tendency to seek stimulation in the company of others. High extraversion is often perceived as attention-seeking and domineering. Low extraversion produces a reserved, reflective personality, which can be perceived as aloof or self-absorbed.[6] Extroverted people tend to be more dominant in social settings, as opposed to introverted people, who may act more shy and reserved in such settings.[7]
  • Agreeableness (friendly/compassionate vs. challenging/detached). A tendency to be compassionate and cooperative rather than suspicious and antagonistic towards others. It is also a measure of one's trusting and helpful nature, and whether a person is generally well-tempered or not. High agreeableness is often seen as naive or submissive. Low agreeableness personalities are often competitive or challenging people, which can be seen as argumentative or untrustworthy.[6]
  • Neuroticism (sensitive/nervous vs. secure/confident). Neuroticism identifies people who are more prone to psychological stress.[8] It is the tendency to experience unpleasant emotions easily, such as anger, anxiety, depression, and vulnerability. Neuroticism also refers to the degree of emotional stability and impulse control and is sometimes referred to by its low pole, "emotional stability". High stability manifests as a stable and calm personality but can be seen as uninspiring and unconcerned. Low stability expresses itself as a reactive and excitable personality; such individuals are often very dynamic but can be perceived as unstable or insecure.[6] Research has also found that individuals with higher levels of neuroticism tend to have worse psychological well-being.

Søren Kierkegaard is generally considered to have been the first existentialist philosopher,[2][10][11] though he did not use the term existentialism.[12] He proposed that each individual, not society or religion, is solely responsible for giving meaning to life and living it passionately and sincerely, or "authentically".[13][14] Existentialism became popular in the years following World War II and strongly influenced many disciplines besides philosophy, including theology, drama, art, literature, and psychology. - Wikipedia