Lesson 2: Digital Tools and Ethics
When an organization produces a digital public relations campaign, it has an assortment of tools at its disposal. Knowing how to use these tools ethically can make the difference between a successful campaign and one that ends in crisis. Tools regularly used in public relations campaigns include cookies, tracking software, personalized integrated marketing and big data. This lesson examines each of these tools, as well as some of the ethical traps practitioners need to be aware of when creating their own campaigns.
Each of these tools is designed to give a practitioner insight into public behavior, opinions, interests and dislikes. Usually, this information is gathered through tracking software. Organizations can track and monitor consumer behavior by archiving the items visitors click on, add to a cart, or view on their websites. This information is stored on a person’s computer using “cookies,” small pieces of data saved by the browser that the company can read back on later visits. If you have ever noticed a Facebook advertisement for the exact pair of shoes you just looked at on Amazon, you have seen the product of cookies. Companies can use cookies to tailor advertising content to your specific shopping or browsing history, thereby increasing the relevance of the ad to your life.
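The mechanics are simple: the server attaches a `Set-Cookie` header to its response, and the browser sends the value back on every later visit. The sketch below, using Python’s standard `http.cookies` module, shows how a hypothetical retailer might record the last product a visitor viewed (the cookie name and product identifier are invented for illustration):

```python
from http.cookies import SimpleCookie

# A hypothetical retailer records the last product a visitor viewed.
cookie = SimpleCookie()
cookie["last_viewed"] = "sneakers-sku-1234"           # invented product identifier
cookie["last_viewed"]["max-age"] = 60 * 60 * 24 * 30  # persist for 30 days
cookie["last_viewed"]["path"] = "/"

# This is the Set-Cookie header the server would send with its response.
print(cookie["last_viewed"].OutputString())

# On the next visit the browser echoes the cookie back; the server parses it
# and can now tailor content (e.g., sneaker ads) to this visitor.
returned = SimpleCookie()
returned.load("last_viewed=sneakers-sku-1234")
print(returned["last_viewed"].value)  # → sneakers-sku-1234
```

Because the identifier persists across visits, any page that reads it can tailor content to the visitor's history, which is exactly why disclosure and opt-out options matter.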
Cookies are just one way that organizations can get valuable information about the public. Many organizations purchase data from big-data warehouses, which sell the results of their own tracking and analysis. Big-data warehouses will sell a company thousands of pieces of information about specific consumers or a general group of potential customers. Companies pay for this service, which allows them to focus on tailoring campaigns rather than collecting data. If you have ever received an ad via email with your name in the heading, it may be a sign that the company purchased data about you from a big-data warehouse.
Public Anxiety of Digital Campaigns
Most of the public already knows they are being tracked and monitored, which understandably raises their anxiety about online activity. Researchers know that the public is often anxious about the way their digital information and behaviors are collected, analyzed and used by public relations practitioners developing campaigns. Before creating a campaign, practitioners should know what makes the public anxious so they know what to be sensitive to.
Kernaghan (2014) ranked three major sources of public anxiety regarding corporate use of digital technologies, listed here from most to least concerning:
- The public was most concerned with diminished privacy and the targeting of campaigns based on personal information obtained through tracking software. The public was uncomfortable with campaigns that were too tailored to their specific interests or browsing history. The use of a customer’s name, demographic information, address, or family information in tailored content was considered a violation of privacy.
- The public was concerned about the hidden messages within complex end-user license agreements that may result in unfair corporate advantages over individual rights. When logging into a new website, users are frequently asked to agree to a lengthy end-user license agreement. Most people do not read the agreement before pressing “agree.” As a result, the public feared that they may agree to something hidden or unethical without their knowledge. For example, the public was worried that they may have given up their privacy or personal information to third parties or external companies.
- The public was concerned with the perceived lack of legal policies regulating digital campaigns and corporate actions. As mentioned in lesson one, many of the digital platforms used by public relations practitioners are still new. Although some public relations strategies have gone before the courts (like scrubbing), many exist in a legal gray area, and it is unclear whether their use is regulated at all. The public perceives this as leaving users unprotected and open to manipulation by organizations.
By knowing these three public fears of digital campaigns, practitioners can more easily develop tactics that use digital tools ethically without provoking anxiety. For example, a campaign should not directly divulge personal information about a specific user (such as a name or address). Graphics specialists can work with the legal team to clarify an end-user license agreement through visuals that educate and inform the public about the contract’s meaning. Finally, practitioners can reinforce the organization’s stance on ethics by advising against tools that may be legal but are likely to provoke anxiety in the public.
Tracking Software
When an organization wants to learn about the public’s use of digital media, it relies on tracking software. This software allows practitioners to gain insight into details of the public’s life without directly communicating with them. These observations can then be used to personalize campaigns. For example, tracking software might tell a practitioner that a customer is shopping online for new sneakers. The organization then adds pictures of sneakers to marketing communications such as emails, digital advertisements or social media posts. The goal is to tailor the campaign so the person notices its relevance to their own life, without making them feel monitored or watched.
Usually, the integration of information from tracking software into a personalized promotion is an automated process, meaning a computer system does the work for the practitioner. However, practitioners should still monitor these campaigns to ensure that data is being used responsibly and customers do not feel as if their private information is being used inappropriately.
In addition, storing customer data securely is important to maintaining the ethical integrity of the organization. For example, when an organization collects information about customers for a shopping rewards program, it must make sure the data is protected from hackers or third parties who may use it unethically. This sensitive information can include data such as social security numbers, credit card numbers and phone numbers.
In late 2013, inadequate protections allowed hackers to access the payment card data of roughly 40 million Target customers, along with personal information (names, addresses, emails and phone numbers) of as many as 70 million. The breach occurred during the winter holiday shopping season, leaving those shoppers vulnerable to identity theft. Although Target had its own system for protecting user data, the actions of the hackers caused a crisis. This demonstrates the need for organizations to use robust methods of protection for the data they collect from customers.
Although the use of tracking software has become an industry norm, it is still important to assess it against Bowen’s 15 Ethical Guidelines for Digital Public Relations. Arguably, four of the 15 guidelines are compromised by the use of tracking software.
- Tracking software violates the “avoid deception” guideline. Although research has shown that users know they are being tracked to some extent, they are generally unaware of the extent of the information gathered. They are also generally unaware of the process of tracking, how their data is managed and protected, and who else has access to their information. These important details are frequently undisclosed to users, yet may impact their well-being and trust in the organization.
- Using tracking data violates the “eschew secrecy” guideline. Most practitioners intentionally do not tell users when their data is being used to tailor content or build a campaign. This type of secrecy has the potential to alienate customers who feel as if they are uninformed about the use of their own information.
- Tracking data does not allow the public to “clearly identify” where information is coming from. Bowen recommends telling consumers the origins of messages and content. By not disclosing the process of tracking data, users cannot identify the source of a message or how a communicator got their information.
- Because so much of the tracking and personalization process is automated, it is difficult to “verify sources and data.” There is no individual public relations practitioner who is responsible for ensuring the data collected is accurate.
Consider what would happen if you lent your computer to a friend. Data about your friend’s use would be attributed to you, even if your tastes and interests differ. There is no way to verify the accuracy of the information being collected, which can lead to major ethical problems if messages are tailored to inaccurate data.
Target Example
Consider a now-famous example reported by The New York Times in 2012. After a teenage girl purchased pregnancy-related products from Target using her credit card, the retailer began sending flyers and coupons for baby items to her address. Her father, irate, complained to the store about its inappropriate tailored advertisements because he was unaware of his daughter’s pregnancy. Weeks later, he apologized after learning that she was in fact pregnant, something the store knew about his daughter before he did. Although in this example Target did accurately tailor the advertisement to the teenager, had she not been pregnant, the flyers would once again have been labeled an invasion of privacy.
The Target example raises questions about the ethical use of tracking software. There is simply no way for customers to opt out of the tracking, meaning they receive targeted ads no matter what. In the case of the teenager above, this meant revealing a sensitive secret to her family. Applying Bowen’s “rational analysis” guideline, this is likely not a tactic practitioners would want used on themselves.
Ethical Use of Tracking Software
So how can tracking software be used ethically? First, telling users about the tracking process and any software used adds transparency to the process. If a website uses cookies to collect data, being upfront with users may help reduce the anger or frustration that develops from the use of collected information. In addition, telling customers about the way the organization protects the data is also helpful. This disclosure may help reduce anxiety and increase trust that the organization cares about the public’s wellbeing. In addition, should any data breaches occur, reaching out to customers and impacted members of the public once the problem is identified will help individuals protect themselves as soon as possible.
Finally, providing the public a way to opt out of data collection and tracking also helps build a conscientious and responsible public image. This can be accomplished by providing an “unsubscribe” button on all digital communication, as well as reminding users that they can block or delete “cookies” in their browser settings, which limits this form of tracking.
Beyond running their own tracking software, such as cookies, organizations can also purchase data from big-datasets and big-data warehouses. Three of the largest US data warehouses are HPE, Spokeo, and Hadoop. You can go to their websites to see what type of information they may have on you or people like you.
These big-data warehouses gather data from a variety of sources and then consolidate or triangulate the data to compile profiles of information on users. This includes collecting data from social media profiles, shopping histories, credit card statements, and even tax files. Recently, controversy has surrounded the use and gathering of data from electronic fitness trackers such as Fitbit and Jawbone.
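To illustrate the idea of consolidation, the toy sketch below merges records about the same person from two invented sources into a single profile, keyed on an email address. All names and data here are hypothetical; real warehouses match records at far larger scale and with probabilistic matching, not a simple dictionary merge:

```python
# Two invented data sources holding fragments about the same person.
social = {"pat@example.com": {"interests": ["running", "travel"]}}
purchases = {"pat@example.com": {"last_purchase": "sneakers"}}

def consolidate(key, *sources):
    """Merge every source's record for `key` into one combined profile."""
    profile = {}
    for source in sources:
        profile.update(source.get(key, {}))
    return profile

print(consolidate("pat@example.com", social, purchases))
# → {'interests': ['running', 'travel'], 'last_purchase': 'sneakers'}
```

Even this toy version shows why consolidated profiles raise privacy concerns: each source alone reveals little, but the merged record says far more about a person than any single company collected.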
In 2015, Fitbit became the center of national attention after a woman discovered she was pregnant through her device’s heart-rate monitor. After the monitor recorded her resting heart rate at nearly twice its normal level, she went to her doctor, who confirmed her pregnancy. Like Target in the earlier example, Fitbit knew the woman was pregnant before she did.
Questions then arose about the database archiving of Fitbit information. Could Fitbit sell this woman’s information to big-data warehouses? Could retailers then use this information to market baby items to the woman (perhaps even before she knew she was pregnant)? In short, is it ethical for a company to use data gathered from a Fitbit to tailor communication? This type of data archiving is still so new that there does not yet seem to be an industry ethical standard. It is up to public relations employees to advise their clients on the use or rejection of this type of data.
End-user License Agreements
Before cookies or other tracking software begin to track a user, the individual must accept an end-user license agreement (EULA). EULAs are commonly written by lawyers to protect the organization and disclose any risk the user may take on. A EULA is a type of contract between the organization and the user that outlines the nature of their relationship and what each party can expect from the other.
Many scholars argue that EULAs are too complicated and complex for the average reader to understand. Users simply agree to the terms and conditions without reading the contract and thoughtfully considering what personal implications it may have. Some researchers even question if these contracts are legally enforceable because of how few users actually read and understand them.
The tough part of EULAs is that they provide legal protection for the organization, but that does not mean the organization’s practices will be ethical. In fact, it can mean the exact opposite. In 2010, Gamestation, a UK-based electronic games retailer, hid an unethical clause in its terms of service. For one day, the company jokingly amended its contract so that users would agree to hand over “their immortal souls.” Although it was intended as an April Fools’ joke, customers who found out protested and boycotted the retailer. Some even petitioned the British government for stronger regulation and oversight of retailers’ EULAs. Gamestation apologized.
Clearly, Gamestation’s joke was unethical and not legally enforceable, but overall EULAs have some ethical hurdles to overcome. When applying Bowen’s 15 Ethical Guidelines, EULAs complicate our understanding of “be transparent” and “disclose.” EULAs do both, but to a fault: they disclose essentially everything, which makes them long, time-consuming, and daunting for everyday users. Some researchers suggest that the length of EULAs is intended to alienate readers and discourage them from trying to read and understand the messages. This creates an opportunity for unethical clauses within the contract (as in the Gamestation case).
Public relations practitioners need to help the legal team create EULAs that are manageable and understandable for users, yet still comprehensive enough to protect the organization. One suggested way to accomplish this is to create visually appealing contracts with graphics, videos, or illustrations that convey the information. Despite this suggestion, few organizations have changed their EULAs to be more easily read or understood by everyday readers.
Instances like Gamestation’s joke have produced an environment where consumers no longer feel comfortable with the items and clauses of a EULA. Although most readers accept the terms and conditions, public anxiety on the matter has increased tension between organizations and the public. When monitoring the digital reputation of an organization, public relations practitioners need to manage user reactions to EULAs as well as the tension they create.
Legal Regulations of Digital Ethics
EULAs are not the only legal regulations that affect the relationship between an organization and its digital public. Although regulations vary by country and even state, there are three guiding principles that inform laws and policies on digital public relations practices.
- In general, communication from an organization must reveal the sender or origin of a message. For example, emails featuring retail promotions must tell the receiver that they come from the organization. In other words, it should be obvious who sent the email. This helps the user judge whether the information or links are trustworthy, and helps automatic filters organize messages.
- An organization must provide information on how digital user information is stored, protected and used. This information needs to be publicly available, so users can access or check it if they sense there is a problem. Problematically, there are few policies about how clear this information needs to be. Thus, EULAs that contain this information are often too complicated for readers to understand.
- An “opt-out” capability should be provided on all digital communication from an organization, including promotional emails, native advertisements and pop-up windows. Again, while the policy is vague, users should have the ability to stop receiving communication from an organization at any time. For example, any email you receive from an organization should have an “unsubscribe” button or procedure.
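As a concrete, hypothetical illustration, these three principles can be checked mechanically when assembling a promotional email. The sketch below uses Python’s standard `email.message` module; every address and URL is invented for the example:

```python
from email.message import EmailMessage

# Hypothetical promotional email assembled to follow the three principles.
msg = EmailMessage()
msg["From"] = "Example Retail <promotions@example.com>"        # 1: reveal the sender
msg["To"] = "customer@example.com"
msg["Subject"] = "This week's sneaker deals"
msg["List-Unsubscribe"] = "<https://example.com/unsubscribe>"  # 3: machine-readable opt-out
msg.set_content(
    "New sneakers are in stock.\n\n"
    "How we store and use your data: https://example.com/privacy\n"  # 2: disclosure
    "Unsubscribe at any time: https://example.com/unsubscribe\n"     # 3: visible opt-out
)

print(msg["From"])
print(msg["List-Unsubscribe"])
```

The `List-Unsubscribe` header is the mechanism mail clients use to surface a one-click unsubscribe option, which complements the visible link in the body.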
Although these guidelines are general, there is limited research on how widely organizations adopt them. In addition, many scholars question their enforcement, since there is seemingly no regulatory force responsible for following up with organizations that do not comply. It is therefore the task of public relations employees to ensure their communication consistently follows these three legal guidelines.
Case Study: Yelp Scrubbing Away Bad Reviews
Background
Throughout the early 2000s, customers began posting reviews of products, companies and media to online websites for other users to base purchase or interaction decisions on. These online spaces, called Third Party Review Sites (TPRS), allow users to post (sometimes anonymously) reviews of their recent purchases, retail or service experiences, and interactions. Previous literature suggests that the public is more likely to trust reviews on TPRS than traditional advertising or public relations tactics because they are perceived as honest, unbiased and truthful depictions of personal experiences. This makes TPRS a powerful representation of an online brand, company, or organization. It is also why, in 2014, organizations that attempted to manipulate, change, or edit their online reviews were heavily criticized.
The term “scrubbing” refers to an organization attempting to “scrub,” or remove, negative reviews from its profile and replace them with fake positive ones. This artificially raises an organization’s average rating and hides bad reviews from the public. Although legal, the act of scrubbing is now seen as highly unethical and questionable by public relations practitioners. Despite scrubbing’s bad reputation, the world’s largest TPRS, Yelp, was accused in 2014 of that exact practice by a group of San Francisco small businesses.
Dilemma
The small businesses claimed that Yelp unfairly approached them demanding payment for electronic advertisements in exchange for scrubbing their profiles clean of bad reviews. They argued in federal appeals court in San Francisco that Yelp demanded they buy expensive online advertisements to artificially inflate the reviews on their online profiles. The group claimed this tactic produced an environment of unfair competition: larger businesses would be able to pay for the service while small businesses lacked the funds to do so. The larger businesses would therefore have better control over their online presence and reputation, creating an unfair advantage.
Course of Action and Consequences
Yelp flatly denied that scrubbing was occurring on its site, dismissing the small businesses as “fringe commenters” who accuse the company of accepting money to remove bad reviews. In its public statement, Yelp accused this “fringe” group of “having an axe to grind,” implying the accusations were unfounded and unethical.
The lawsuit brought by the San Francisco small businesses was quickly dismissed, with the court ruling in Yelp’s favor and calling the tactic “legitimate advertising services.” Although the ruling treated scrubbing as legal, the public’s reaction upon learning that Yelp was scrubbing reviews suggested this “legitimate” advertising service violated an ethical boundary.
The San Francisco small businesses created a new platform, Yelp-Sucks.com. The public took notice and began to complain and contribute their own stories of reviews being removed from the site. Users added testimonials and shared the website with a larger audience, accelerating the spread of information about Yelp’s scrubbing. Although Yelp denied that scrubbing was taking place, the user accounts on Yelp-Sucks.com provided evidence to the contrary and challenged Yelp’s own reputation.
As Yelp-Sucks.com gained popularity and news of scrubbing spread, Yelp continued to deny the accusations, referring to the site as “fringe” and implying the small businesses were on a mission to get even after losing the lawsuit. The outrage voiced on Yelp-Sucks.com gained momentum and led to a mass exodus, with hundreds of users deleting their Yelp accounts.
Although Yelp continued to deny that scrubbing was taking place, in mid-2015 it very publicly deleted thousands of reviews from the profile of Dr. Walter J. Palmer, a Minnesota dentist accused of hunting and killing a famous lion in a protected Zimbabwean park. Anti-hunting advocates had posted angry reviews of Palmer to Yelp; the platform quickly removed them all, stating that they violated the site’s terms of service. As the issue gained salience and news coverage, more attention was paid to Yelp’s removal of the protest reviews and its history with scrubbing.
In an effort to address the scrubbing crisis, Yelp introduced a “pop-up” notification for profiles that were suspected of scrubbing or paying for fake reviews. Using an algorithm based on the analysis of thousands of fake reviews, Yelp announced it was now launching this new initiative so that users would be more aware when looking at honest versus scrubbed profiles. The pop-up notifications were framed as a service to the public and a way for the platform to protect the integrity of its information.
Although Yelp still denies scrubbing and has introduced initiatives to prevent unethical behavior on its platform, many users and companies still claim that unethical practices are ongoing. In 2014, a class action lawsuit was filed against Yelp that identified 2,000 companies that had filed complaints with the Federal Trade Commission. These complaints ranged from unfair-competition violations to accusations of false advertising. Rather than deny or directly address these accusations, Yelp adopted a policy of silence and instead promoted its “pop-up” notification initiative.
Moral of the Story
Yelp continues to deny that scrubbing takes place on its platform, despite recent lawsuits claiming otherwise. Instead, the company portrays its “pop-up” notifications as a cutting-edge, innovative approach to preventing dishonest reviews. It is unclear whether scrubbing continues on the site or to what extent the controversy has damaged the public’s perception of the brand. Public relations practitioners should learn from Yelp how much harm denying unethical practices can cause, and should create ethical guidelines that oppose such practices.
Further Readings
Baek, H., Ahn, J., & Choi, Y. (2012). Helpfulness of online consumer reviews: Readers' objectives and review cues. International Journal of Electronic Commerce, 17(2), 99-126. doi:10.2753/JEC1086-4415170204.
Matthews, M. (2016, Feb 4). “Still upset about Cecil the Lion? Mobile group joins worldwide protest.” Alabama News. Retrieved from http://www.al.com/news/mobile/index.ssf/2016/02/still_upset_about_cecil_the_li.html
Sperber, J. (2014). Yelp and labor discipline: How the internet works for capitalism. New Labor Forum, 23(2), 68-74.