Monday, November 23, 2015

Student Research Spotlight - The Effects of Authority on Social Engineering Sensitive Information

Tedd Scheall Johnson
The Effects of Authority on Social Engineering Sensitive Information


Think of a time when you, or someone you know, were the victim, or nearly the victim, of some sort of scam. Perhaps it was an unexpected prize notification for a contest you can't recall entering. Or maybe a malicious group tried to trick you into giving them full access to your personal online information. Such tactics capitalize on something referred to as “social engineering,” or methods of social interaction meant to deceive or manipulate others.

Of course, successful manipulation requires adequate knowledge of, and practice in, social behavior. Discovering how people are easily influenced, before malicious scammers do, is a first step toward preventing such scams in the first place.

After a personal family experience, BYU–H graduate Tedd Scheall Johnson became interested in the psychology of social engineering and decided to focus his senior project on the topic.

Johnson was recently asked to describe his project experience.

How did you come up with your project idea and hypothesis? What sparked your interest?

I have always had an interest in Social Engineering, or "the art of human hacking." On TV or in movies, you see people pulling all kinds of cool "spy tricks" to get into a place, but in my work experience, it's more often simple methods that are used to bypass security. A common method used in the Information Technology (IT) world is simply acting like you belong and wandering into an area. I decided to look more into the actual studies that have been done, and was led to the study on uniforms by Bickman. I decided to see if something similar could be done with personal sensitive information, which is usually a high target for theft.

Clothing and appearance, as influencing variables, became the basis for Johnson’s experiment. He hypothesized that undergraduate college students would be more likely to provide sensitive personal information—that is, their social security number and mother’s maiden name—on a form when asked by someone wearing a white lab coat and administering the form on a clipboard than by someone in casual clothes with just a sheet of paper. The hypothesis drew on previous research showing how persuasive a person can be simply through perceived authority.

However, performing an experiment that asks for this kind of specific information can be tricky. Many variables need to be controlled for, the main one being that participants must believe they are part of a completely separate experiment asking different questions. In a word: deception. In psychology, we say this controls for, or prevents, the presence of “demand characteristics.” Demand characteristics are features of an experiment that strongly suggest to subjects that they must act a certain way, thus compromising the integrity and authenticity of their responses. Though the study takes place in a laboratory, the goal is to match real life as much as possible. Imagine if Johnson were to ask subjects to participate in his study on “how professional clothing influences behavior.” They might immediately begin to make judgments and behave in ways contrary to what they would do without that explicit information. Malicious scammers must deceive in order to be effective; therefore, scientific methods used to tease out those effects must also involve deception.

Johnson could not simply ask participants to fill out a demographics survey that just happened to request sensitive information; that would be too suspicious. So he teamed up with another student researcher who was working on their own senior project at the same time, and added his required items to that researcher’s survey form. This allowed Johnson to control for demand characteristics, because subjects believed they were participating only in the other study. In exchange, Johnson helped his fellow researcher run their experiment while gathering his own data.

As Johnson and the other researcher brought individuals into the lab, the researchers would randomly wear either casual clothes or professional attire, complete with a lab coat and clipboard. Subjects were then given a demographics survey that asked for the sensitive information at the end of the page. Once the form was completed, subjects were instructed not to hand the form back, but to first cut off the bottom two items with scissors and destroy them in a shredder. No sensitive information was ever seen by Johnson or the other researcher.

One might ask, “Were there not any who protested?” There were, indeed. Some wrote down information without reservation. Many paused with trepidation and disbelief, asking if the research really needed that information, but still provided it after some persuasion. Some flat-out refused to provide it, despite the researchers' persistence. In all circumstances, when a subject asked if the information was necessary, both Johnson and the other researcher would reply with, “It’s for the purposes of the study, please continue.” This phrase was repeated until subjects either complied or refused. (If they refused, they were allowed to continue the study without answering those two questions.) All responses were noted, and each participant was debriefed and notified of the purpose of that portion of the study before continuing.

After analysis, Johnson found results that differed in an interesting way from what the hypothesis predicted. In his sample, there was no difference between the groups. That said, both groups (lab coat and no lab coat) contained a remarkable number of subjects willing to provide sensitive information, regardless of authoritative appearance.
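For readers curious how such a between-group comparison is typically evaluated, the short Python sketch below runs a chi-square test of independence on disclosure counts using SciPy. The counts are hypothetical placeholders, not Johnson's actual data; his statistical results are on the project poster linked at the end of this article.

    # Hypothetical sketch of a two-condition comparison of disclosure rates.
    # The counts below are invented for illustration and are NOT Johnson's data.
    from scipy.stats import chi2_contingency

    #          provided info, refused
    counts = [[18, 4],   # lab coat + clipboard condition
              [17, 5]]   # casual clothes condition

    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
    # A p-value above .05 would indicate no reliable difference in disclosure
    # rates between the two conditions, matching the null result described above.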

Johnson described these results on his project poster:

There are many reasons this could have happened, and I believe a follow-up experiment would be warranted. Due to the fact that the majority of participants provided their sensitive information regardless of the uniform of the researcher, I believe repeating the experiment on a larger campus where the researcher would be completely unknown to all participants (as well as all assumptions about the researcher due to the nature of a small religious campus) may yield significance.

An interesting point discovered was how many people were willing to provide this sensitive information, even after they had been hesitant to do so. Similar to Milgram’s experiment in 1965, those who showed hesitation were instructed by the researcher to ‘please provide the information asked for, so we may continue the experiment.’

Overall, it is clear that the problem of exploiting human trust still exists, and that further education and experiments are needed. Many students were unaware that this information is personal, and that they shouldn’t be giving it out without an actual valid reason. No pretext was used to suggest that this information was in any way needed for the experiment. While there was an approved IRB consent form, no participants asked for any kind of validation of its authenticity or approval.

This brings up another interesting point about the scientific method and the process of drawing conclusions. Results might tempt individuals to make broad generalizations from the experiment’s circumstances to the human population at large. In other cases, results simply do not pan out as predicted. When that happens, the temptation may be to become discouraged, to conclude that the information provides no utility or that the methods used were inherently flawed. While that is sometimes the case, contradictory results are more often still useful. In Johnson’s experiment, for example, while the variable of professional appearance did not seem to have an effect, the researcher’s persistence in requesting information and the apparent trust of the subjects did appear to have a profound influence, which would be beneficial to examine in future studies. Often, results can be far more interesting than the hypothesis.

Johnson’s interview continues:

How did the 305 and 490 classes go for you?

I loved both classes! Both of them were difficult for different reasons, but had their own rewards. 305 laid a very strong foundation on how to properly do research, perform an experiment, and frankly prepare my project. There was a lot of work to be done on the papers and readings, but overall it was a wonderful experience. 490 was also great, because it allowed real-world application of what was learned in 305. The biggest difficulty in 490 was, of course, actually running the experiment, which took more work in itself than I think I expected. I wish I had started collecting data sooner, and had the opportunity to collect MORE data. Analyzing the results and writing the conclusion were work as well, but at that point it felt rewarding to be finishing my project.

How did it feel when you finished your project and presented it to your peers and instructors?

Finishing my project was gratifying. It is a lot of work from the beginning of 305 to completion. The presentations were also a lot of work, but having done the "first round" presentation in 305 helped significantly. I think the best part about finishing my project, or at least the most exciting, was reviewing how I could rerun or modify the experiment. Ultimately, my findings were not significant, so I had to reject my hypothesis, which was a bummer, but to paraphrase Dr. Timothy: Run the experiment correctly, and the outcome is still worth the work.

Now that you have graduated, what do you hope to accomplish, or what do you aspire to do with your degree in psychology?

I've told people that my secret goal is to get my Ph.D. in psychology, my J.D. in law, and go be intimidating in court. I don't think this is likely to happen, but I do know that I will continue to use psychology every day of my life. I have found that it has given me a much better understanding of other people, as well as myself. It has helped me in both my personal and professional life. I will say that I am grateful for the heavy emphasis on statistics and research methods in our program, because both things have helped me in other fields as well. Ultimately my degree in psychology has taught me to think critically, and communicate my ideas more effectively. I love the program, and highly recommend everyone take the chance to learn at least a little bit about the field.

You can see Tedd's study in more detail, including statistical test results, by viewing his project poster here.

Article by Kyle Evan Madsen


Monday, September 14, 2015

Student Research Spotlight - Human-horse Interaction vs. Observation and Their Effects on Anxiety

Amy Van Leuven
Human-horse Interaction vs. Observation and Their Effects on Anxiety



Think of the last time you interacted positively with an animal. Perhaps it was when you affectionately cuddled with your pet dog or cat. Did you feel your level of stress decreasing? Did you feel your heart rate or blood pressure decrease as you became more relaxed? Or, think of the last time you looked at a picture or watched a video of an animal, such as a couple-week-old puppy. Do you recall what effect that had on your emotional state? Maybe most of us don’t have such keen insight into detecting decreased heart rate and blood pressure, but it’s not uncommon to hear others speak of positive effects from such interactions.

The idea that positive interaction with animals can be an effective way to relieve stress is not only common, but is also implemented in various forms of therapy. Documented empirical cases have shown animal interaction to reduce stress or anxiety. Relieving stress through animal interaction was what first interested Amy Van Leuven when deciding on her senior research topic. She noticed that the literature on the subject seldom includes tests involving interaction with horses specifically.

Because she is fond of horses, she decided that using them in her study would be the most interesting and rewarding course of action, despite facing a potentially “far more challenging” process than other projects would have involved, she said. Van Leuven also said being encouraged by her professor to follow a hypothesis that she cared about was “vital to her success.” Not only did she need to construct a methodological process to test her hypothesis, as her peers did, but she also needed to secure permission to use several live horses, which took more time and energy than the typical senior project. It was worth it, however, as the results would help inform the up-and-coming field of equine therapy, a cause Van Leuven “whole-heartedly wanted to pursue.”

Van Leuven recruited 16 subjects and had them participate in four separate sessions where they would either interact with a horse (i.e., ride, pet, or otherwise handle), or observe another person as they interacted with a horse—two sessions for the former, and two for the latter. She wanted to see if interaction would prove more effective than passive observation in relieving anxiety.

How did she measure stress? Van Leuven used three forms of measurement to analyze the anxiety levels of her participants and track possible change across conditions. The first was a questionnaire used for stress detection: the State-Trait Anxiety Inventory (STAI). The other two were physiological measurements: a heart rate (HR) monitor and a blood pressure (BP) gauge. In order to make comparisons, Van Leuven had each participant take the STAI survey and record HR and BP before and after animal interaction or observation. Then all she needed to do was subtract the before-score from the after-score to obtain a “change score.” This was done in each of the four sessions.

Interestingly, this comparison did not reveal a statistically significant difference in stress reduction between interaction and observation. Both seemed to reduce stress about equally (resulting in a failure to reject the null hypothesis). However, Van Leuven did find significant differences when she treated all sessions as one group, essentially removing the interaction-versus-observation condition. Participants’ STAI scores and heart rates both dropped, indicating that even though there was no difference between interacting with and observing horses, the experience still led to a certain degree of stress reduction.
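As a rough illustration of this analysis pattern, the Python sketch below computes change scores (after minus before) and then runs the two comparisons described above: an independent-samples test between the interaction and observation conditions, and a paired test on all sessions pooled together. All numbers are invented placeholders, not Van Leuven's measurements.

    # Hedged sketch of the change-score analysis described above.
    # The STAI values and condition labels are invented placeholders.
    import numpy as np
    from scipy.stats import ttest_ind, ttest_rel

    # STAI anxiety scores before and after each session
    before = np.array([42, 38, 45, 40, 39, 44, 41, 37])
    after  = np.array([36, 35, 40, 34, 33, 39, 36, 33])
    change = after - before                      # negative = anxiety dropped

    # Condition label for each session: interaction vs. observation
    condition = np.array(["interact", "observe"] * 4)

    # 1) Did interaction reduce anxiety more than observation did?
    t_between, p_between = ttest_ind(change[condition == "interact"],
                                     change[condition == "observe"])

    # 2) Pooling all sessions: did anxiety drop from before to after at all?
    t_pooled, p_pooled = ttest_rel(before, after)

    print(f"between conditions: p = {p_between:.3f}")
    print(f"pooled before vs. after: p = {p_pooled:.3f}")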




When asked how the process in general played out for her, Van Leuven said her seminar classes were “vigorous, exciting, sometimes painful, and exhausting.” “But,” she continued, “they hold some of my favorite memories from my entire college experience.”


After graduating, Van Leuven continued her path in equine therapy. She said,


“My ‘fire’ for equine therapy is still very much alive. It is my senior research that convinced me that this is what I wanted to do. Shortly after graduation, I obtained my PATH (Professional Association of Therapeutic Horsemanship) certification allowing me to teach therapy-based lessons. I began a lesson program at Gunstock Ranch on Oahu. When it came time to move off of the island, I left the program in some very capable hands and am now working at Ascend Recovery in American Fork, Utah. It is an alcohol and addiction recovery center. I am the Equine Specialist, and conduct sessions with the patients several times a week. Thus far, it has seemed that their sessions are a place of relief, peace, and discovery. I absolutely love what I do and I would not be here if it were not for my seminar classes and the undying support of the Psychology department.”


You can see Amy's study in more detail by viewing her project poster here.

Article by Kyle Evan Madsen

Monday, August 24, 2015

Student Research Spotlight - Positive and Negative Messages and the Efficacy of Sports Drinks on Performance

Natalie DeMartini
Positive and Negative Messages and the Efficacy of Sports Drinks on Performance


To what extent do our preconceived ideas and beliefs affect an outcome? According to Natalie DeMartini's research on the expectancy effects of sports drinks, one's preconceived belief may be enough to override a placebo. Her experiment involved a number of clever methodological techniques designed to examine how a placebo interacts with belief.

Some placebos involve intentionally deceiving an individual with a fake treatment. Placebos are most often used in the medical community, in double-blind experiments, to determine the effectiveness of a particular treatment. For example, researchers test a drug's active ingredients by comparing them to placebo pills made of pure sugar or water. Including a placebo helps determine whether the active ingredient actually has an effect, rather than relying simply on a patient's subjective self-report. Often, the mere belief that a particular treatment works will be enough for the patient to report alleviation of pain or other symptoms. The phrase "double-blind experiment" indicates that neither the medical staff administering a drug nor the patients taking it know whether the treatment contains an active ingredient or is fake. Researchers then measure the effects in both groups to see more clearly whether the active ingredient is truly efficacious. In other cases, a placebo may simply involve informing a subject that a particular effect will follow a given treatment. A "placebo effect" occurs when an individual subjectively reports that the treatment in question has had an effect that coincides with the deception. This is also sometimes referred to as a "self-fulfilling prophecy."

DeMartini was interested in seeing whether the placebo effect occurs in the area of nutrition, namely, with sports drinks. When we consume these products, do they actually aid our physical activity, or do we just think they do? How can one tell if sports drinks do what they are marketed to do? Can it simply be attributed to "mind over matter"? When trying to decide on a subject from which to draw a hypothesis, DeMartini says she remembers, from when she was active in endurance sports, watching other athletes consume various types of nutrition products "almost dependently." "They feel that it's necessary to win," she continues, "even though they are supposed to be taken after calorie depletion, not before." She says her hypothesis was born from wanting to see if a sports drink "actually has an effect beforehand, or if it precedes a self-fulfilling prophecy."

DeMartini hypothesized that participants primed with a positive message about sports drinks ("This is a new sports drink and it helps to increase athletic performance") would perform better in a physical activity after drinking a placebo sports drink than would those given a negative message ("Sports drinks have been shown to have no effect on physical performance").

She recruited 65 participants and randomly assigned them to one of three groups: positive, negative, and neutral. (The neutral group was given no message.) She did not tell the participants which group they were in. Each subject ran one mile on two different occasions. The second run occurred 48 hours after the first, and its time was subtracted from the first run time to calculate a difference score. She also instructed participants to consume a sports drink moments before the second run, describing it as a brand-new energy beverage and accompanying it with the priming message. In reality, every drink given to participants was simply a mixture of water and flavoring.
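A minimal sketch of these two design steps, assuming hypothetical participant IDs and run times rather than DeMartini's data, might look like the Python below: random assignment to the three message groups, followed by the difference-score calculation (first run time minus second).

    # Hedged sketch: random assignment and difference scores.
    # Participant IDs and run times are invented for illustration only.
    import random

    participants = [f"P{i:02d}" for i in range(1, 66)]   # 65 participants
    random.shuffle(participants)
    groups = {"positive": participants[0::3],
              "negative": participants[1::3],
              "neutral":  participants[2::3]}
    for name, members in groups.items():
        print(name, len(members))

    # Mile times in seconds: first run, then a second run 48 hours later
    run1 = {"P01": 540.0, "P02": 505.0}   # ...one entry per participant
    run2 = {"P01": 528.0, "P02": 512.0}

    # Difference score: first run minus second (positive = faster second run)
    difference = {p: run1[p] - run2[p] for p in run1}
    print(difference)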

According to the placebo-effect principle, one would expect the positive-message group to perform better, right? At first, that is not what the data showed: DeMartini found no evidence for the placebo effect when she combined all participants in her analysis.

However, DeMartini included one crucial step in her experiment. Each participant filled out a demographics form that included the following question: "Do you believe that sports drinks have an effect on your athletic performance?" The answers participants gave helped to shape further analysis and revealed an interesting interaction between expectancy and placebo.

Participants could answer by stating a positive belief ("Yes, they work great"), a negative belief ("No, I don't think they help at all"), or a neutral belief ("I don't know").

It generally didn't matter what message the positive- and negative-belief individuals were given; they still performed according to their expectations. The placebo effect was observed, however, in the group of individuals who took a neutral stance. Those given a positive message had a reduction in run time, while those given a negative message had an increase. Those not given any message at all had a slight decrease in run time, also suggesting a possible placebo effect.
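One conventional way to look at a belief-by-message breakdown like this is to tabulate the mean change in run time for each combination, as in the pandas sketch below. The values are invented and illustrate only the structure of the analysis, not DeMartini's results.

    # Hedged sketch: mean run-time change by prior belief and priming message.
    # All values are invented placeholders, not the study's data.
    import pandas as pd

    data = pd.DataFrame({
        "belief":  ["positive", "positive", "negative", "negative",
                    "neutral",  "neutral",  "neutral",  "neutral"],
        "message": ["positive", "negative", "positive", "negative",
                    "positive", "negative", "none",     "none"],
        # first run time minus second, in seconds; positive = faster second run
        "change":  [14.0, 12.5, -8.0, -9.5, 16.0, -11.0, 4.0, 3.5],
    })

    # The cell means reveal the interaction between belief and message
    print(data.pivot_table(values="change", index="belief",
                           columns="message", aggfunc="mean"))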

When asked how she felt about the process of her experiment, DeMartini recalled mainly two feelings: exhaustion and elation. "I didn't expect as many challenges to occur; it took a lot of planning and brainstorming to eliminate confounding variables," she said. "But," she added, "because I prepared so well in 305 [Research Methods], the project went smoothly."

And finally, when asked what it was like in the end when she had the opportunity to present her project to her friends and peers, DeMartini smiled and noted, "It was very satisfying because I felt confident and proud of my project. I learned something new and was excited to share it." The satisfaction was easy to witness in her tone and words. And while laughing, she concluded, "I added to the world of science, and that's pretty awesome."

You can see Natalie's study in more detail by viewing her project poster here.

Article by Kyle Evan Madsen

Thursday, May 7, 2015

April 2015 Graduates

Congratulations and Ho'omaika'i 'ana to our April 2015 Psychology graduates, as well as our Summer 2015 graduates, who participated in commencement on Saturday, April 18th!


Psychology graduates and faculty at their Graduation Luncheon
From left to right: Dr. Jeff Burroughs, Dr. Eric Orr, Seini Cassandra Ita, Robert Tedd Scheall Johnson, Lena Aroha Hawaikirangi, Taylor Lynne Bobbitt, Jay Ryan Tomlinson, Ashley Saunders-McCutcheon, Linda Renea Hafoka, Jessica Lynn Enos, Dahlia Leena Gatoloai-Dahl, Dr. Brian Kinghorn, Aaron Ka Yu Fang, Joye Leilani Oronoz, Lin Da T. Bui, Sid Francis Edraira Balubal, Kristina Lee Larson, McKenzie Patterson, Dr. Boyd Timothy

Graduates (alphabetical order):

Sid Francis Edraira Balubal
Taylor Lynne Bobbitt*
Lin Da T. Bui
Daniel Fagamanu Danielson
Jennifer Celeste Delgado-Kaka
Jessica Lynn Enos*
Catherine Erickson**
Aaron Ka Yu Fang*
Nathan Fuluvaka
Dahlia Leena Gatoloai-Dahl
Linda Renea Hafoka
Lena Aroha Hawaikirangi
Seini Cassandra Ita
Ahra Jo***
Robert Tedd Scheall Johnson*
Kristina Lee Larson**
Graham Patrick Olson
Joye Leilani Oronoz
McKenzie Patterson***
Ashley Saunders-McCutcheon
David Wesley Staves**
Jay Ryan Tomlinson***

***Summa Cum Laude   **Magna Cum Laude   *Cum Laude