Federated Learning

When I first think of federated learning, what comes to mind is something like a college federated department. For example, the history faculty at NJIT and Rutgers University-Newark are joined in a single federated department offering an integrated curriculum and joint undergraduate and graduate degree programs.

Having worked at NJIT, it made sense to combine the two departments and collaborate. Each had its own specialties but they were stronger together.

In technology, a federation is a group of computing or network providers that agree on common standards of operation, such as two distinct, formally disconnected telecommunications networks with different internal structures.

There is also federated learning, which sounds like something those two history departments are doing, but it is not. Federated learning is a decentralized form of machine learning (ML).

In traditional machine learning, data from several edge devices (mobile phones, laptops, etc.) is aggregated on a centralized server. Federated learning's main objective is privacy-by-design: a central server merely coordinates with local clients to aggregate model updates, without ever requiring the actual data (i.e., zero-touch).

I'm not going to go very deep here about things like the three categories (horizontal federated learning, vertical federated learning, and federated transfer learning). As an example, consider federated learning at Google, where it is used to improve models on devices without sending users' raw data to Google servers.

An online comic from Google AI

For people using something like Google Assistant, privacy is a concern. With federated learning improving “Hey Google,” your voice and audio data stay private on your device even as Google Assistant learns from them.

Federated learning trains an algorithm across multiple decentralized edge devices (such as your phone) or servers holding local data samples, without exchanging those samples. Compare this to traditional centralized machine learning, where all the local datasets are uploaded to one server.

So, though federated learning is about training ML efficiently, it is also about data privacy, data security, data access rights, and access to heterogeneous data.
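The coordination loop described above can be sketched in a few lines of Python. This is a toy illustration of federated averaging (the FedAvg idea), not any real framework's API; the linear model, learning rate, and client data are all invented for the example:

```python
# Toy federated averaging: each client trains on its own private data,
# and the server only ever sees weight vectors, never the data itself.

def local_update(weights, local_data, lr=0.1):
    """One pass of gradient descent on a client's private data.
    The raw (x, y) pairs never leave this function -- only weights do."""
    w = weights[:]
    for x, y in local_data:
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def federated_round(global_weights, clients):
    """The server averages the clients' locally updated weights."""
    updates = [local_update(global_weights, data) for data in clients]
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

# Two clients each privately hold samples of the relationship y = 2*x.
clients = [[([1.0], 2.0), ([2.0], 4.0)], [([3.0], 6.0)]]
weights = [0.0]
for _ in range(50):
    weights = federated_round(weights, clients)
print(round(weights[0], 2))  # 2.0 -- the model learned y = 2*x
```

Each client runs `local_update` on data that stays on the client; the server's only job is averaging, which is exactly the "coordinates ... without requiring the actual data" arrangement described above.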


MORE at analyticsvidhya.com...federated-learning-a-beginners-guide
 

Fake Facebook Accounts

No one is giving Facebook or Meta or Mark Zuckerberg a free pass these days. Criticism is a daily event, and their PR people must be in constant firefighting mode. But Facebook has been doing things to protect privacy and security, and I keep hearing ads on podcasts about how they are promoting safety. Unfortunately, the criticism usually drowns out the tools they do offer and the actions they are taking. If you are on Facebook, you should be doing your part to protect your account. Much of online protection is a matter of personal responsibility.

One of the areas that often gets attention is fake accounts. By the end of 2019, Facebook had removed a staggering 5.5 billion fake accounts. Plenty of companies would be happy to have that many legitimate accounts, but Facebook far exceeds that number. The removals continue: in the third quarter of 2021, Facebook removed approximately 1.8 billion fake accounts, up from 1.3 billion in the corresponding quarter of 2020.

Why would anyone want to make a fake account?

Scammers use fake Facebook accounts to connect with users and then with their friends in order to scrape personal information, which can be used to steal identities. They can also reach out to anyone who has accepted a friend request from the fake account and try to scam them. This is called Facebook cloning.

I have seen a number of my friends' accounts cloned using a few photos they have made public and any "public" information. When a clone of your account is created under your name, you are not the real object of attention. Your friends are the target, and the hope is that they will accept the fake friend request.

The cloned account often looks quite bare. Some people immediately recognize that they are already friends with that person and know this new request is fake. But if you have many friends, or accept requests without looking a bit closer, you can be scammed. If you accept that friend request, the scammer now has access to your friend-visible information and, from there, to your friends' personal information. This access expands rapidly even if not everyone accepts the requests.

To report a fake account, go to https://www.facebook.com/help/306643639690823

You should also take the initiative to use some tools Facebook offers and do some protecting on your own.

  • Privacy Checkup helps you control who can see what you share, how your information is used, and how to secure your account.
  • "Why Am I Seeing This Ad?" appears when you click next to an ad; from there you can adjust which ads you see, starting with that one.
  • Off-Facebook Activity is a tool most users don't know about. It lets you control or disconnect the information businesses send to Meta about your activity on other apps and websites.
  • You can also download or export your data so you can move it between services.

Google and YouTube Changing Features for Kids and Teens


Image by Gerd Altmann from Pixabay

With continuing pressure on the big tech companies to protect user privacy, particularly for younger users, Google is introducing updates to YouTube and its search feature aimed at increasing safety for kids and teens on its platforms. The changes include a number of things to give minors more control over their digital footprint and somewhat constrain commercial content for children.

Some of these changes affect their bottom line profits but there is the PR value of making these changes, and I'm sure they hope it will keep the government from regulating or punishing them for awhile.

Google stated that it wants to work with "kids and teens, parents, governments, industry leaders, and experts in the fields of privacy, child safety, wellbeing and education to design better, safer products for kids and teens." 

For YouTube (via their blog at blog.youtube/news-and-events/):

  • YouTube default privacy settings for users aged 13 to 17 will be the “most private option available” (that only lets content be seen by the user and whomever they choose - teen users can make their content public by changing the default upload visibility setting)
  • YouTube will also start to remove “overly commercial content” from YouTube Kids (for example, videos that focus on product packaging or encourage children to spend money)
  • YouTube will turn on “take a break” and bedtime reminders by default for all users 13 to 17. (Some adults could use that feature!)
  • YouTube will turn off autoplay by default for this age group

There are also changes for other parts of the platform, including search. 

Google will be introducing new policies that allow people under 18, or their parent or guardian, to request the removal of their image from Google Image results. (Removing an image from Search doesn’t remove it from the web.) Google will also turn on SafeSearch, which aims to filter out explicit results, for all existing users under 18, and make it the default setting for teens who set up new accounts. The SafeSearch update will be rolled out “in the coming months,” according to Google.

In other apps, Google will disable location history for all users under 18 without the ability to turn it on. A safety section in Play will show parents which apps follow Google's Families policies and disclose how they use the data they collect in greater detail.

Of course, ads are where Google makes its money, but it will "block ad targeting based on the age, gender, or interests of people under 18." 

 

It Is Way Past Time to Update the Telecommunications Act of 1996

Image by Pete Linforth from Pixabay

If you have been using the Internet for the past 25 years, you know how radically it has changed. And yet, the law governing it has seen no comprehensive update since then.

The news is full of complaints about tech companies getting too big and too powerful. Social media is often the focus of complaints. We often hear that these companies are resistant to changes and regulations, but that is not entirely true. 

On Facebook's site concerning regulations, they say "To keep moving forward, tech companies need standards that hold us all accountable. We support updated regulations on key issues."

Facebook may be at the center of fears and complaints, but it keeps growing - two billion users and counting.

There are four issues that Facebook feels need new regulations.

Combating foreign election interference
We support regulations that will set standards around ads transparency and broader rules to help deter foreign actors, including existing US proposals like the Honest Ads Act and Deter Act.

Protecting people’s privacy and data
We support updated privacy regulations that will set more consistent data protection standards that work for everyone.

Enabling safe and easy data portability between platforms
We support regulation that guarantees the principle of data portability. If you share data with one service, you should be able to move it to another. This gives people choice and enables developers to innovate.

Supporting thoughtful changes to Section 230
We support thoughtful updates to internet laws, including Section 230, to make content moderation systems more transparent and to ensure that tech companies are held accountable for combatting child exploitation, opioid abuse, and other types of illegal activity.

The Telecommunications Act of 1996 was the first major overhaul of telecommunications law in almost 62 years. Its main goal was stated as allowing "anyone [to] enter any communications business -- to let any communications business compete in any market against any other." The FCC said that they believed the Act had "the potential to change the way we work, live and learn." They were certainly correct in that. But they continued that they expected that it would affect "telephone service -- local and long distance, cable programming and other video services, broadcast services and services provided to schools."

And it did affect those things. But communications went much further and much faster than the government and now they need to play some serious catchup. It is much harder to catch up than it is to keep up. 

 

The New World Normal


You are still going to hear more and more this summer and fall about the "new normal" or the "next normal" as we hopefully move out of the pandemic and return to something similar to but not the same as what we called normal in 2019.

Isn't normal always changing? What is normal anyway?

Normality for an individual is when your behavior is consistent with your own most common behavior. But normal is also used to describe individual behavior that conforms to the most common behavior in a society. And normal is at times only recognized in contrast to abnormality.

In schools, we talk about individual student behaviors that are not normal because they contrast with the majority of students. We can talk about an entire college as not following the normal behavior of other colleges.

In March 2021, Rutgers University was the first university I heard announce that all students would be required to be vaccinated in order to be back on campus in September. I am a Rutgers College alum and I was happy to hear the announcement, but it was met with agreement and disagreement immediately. I thought back to when I attended Rutgers in the last century, and to when my sons went off to college in this century. Some vaccinations were required for me and for my sons. The meningitis vaccine was required and is typically given to preteens and teens at 11 to 12 years old with a booster dose at 16 years old. I don't recall any protests about vaccinations for students in K-12 and college being public events before. Typically, vaccinations are recommended for college for measles, mumps, and rubella, meningococcal, human papillomavirus, and influenza.

Talk about "vaccination passports" is a discussion well beyond school campuses.

Great Seal of the United States

I read something online (I'm not linking to it) connecting the post-pandemic normal to the New World Order (NWO), a conspiracy theory that hypothesizes a secretly emerging totalitarian world government. It's not a new conspiracy. Believers point to "evidence" such as the Latin phrase "novus ordo seclorum" on the reverse side of the Great Seal of the United States since 1782, and on the back of the U.S. one-dollar bill since 1935. It translates to "New Order of the Ages" and is generally considered to allude to the era in which the United States became an independent nation-state. Conspiracy theorists claim it is an allusion to the "New World Order."

I also read online that Pew Research found that "A plurality of experts think sweeping societal change will make life worse for most people as greater inequality, rising authoritarianism and rampant misinformation take hold in the wake of the COVID-19 outbreak." That is a harsh prediction. They do add that "a portion believe life will be better in a ‘tele-everything’ world where workplaces, health care and social activity improve."

Their research found that the next normal may "worsen economic inequality as those who are highly connected and the tech-savvy pull further ahead of those who have less access to digital tools and less training or aptitude for exploiting them." They also feel that the changes that may occur will include the elimination of some jobs and enhance the power of big technology firms. Some of that power was growing pre-pandemic through market advantages and using artificial intelligence (AI) in ways that seem likely to further erode the privacy and autonomy of their users.

Likewise, the spread of misinformation online was happening well before the pandemic, so I don't view this as a pandemic-caused issue. Some of the Pew respondents saw the manipulation of public perception, emotion, and action via online disinformation as the greatest threat.

The WHO (World Health Organization) is talking about moving from the "new normal" to a "new future" in a "sustainable response to COVID-19." Some of their recommendations about global health could easily be recommendations for global education.

Recognizing that the virus will be with us for a long time, governments should also use this opportunity to invest in health systems, which can benefit all populations beyond COVID-19, as well as prepare for future public health emergencies. These investments may include: 1) capitalizing on COVID-19 enhancements to surveillance, lab, risk communications and other core capacities, 2) back casting to identify gaps and steer resources to future health needs like genetic sequencing and contact tracing with Information Technology, 3) building on COVID-19 innovations to accelerate recovery and address other pressing health problems, and 4) strengthening multi-sector collaboration to improve health services and reduce health inequity.

EDUCAUSE suggests that part of that next normal in education will be improved student engagement through lessons learned during the pandemic. For example, during COVID-19 teaching, breakout rooms emerged as one way to offer environments for collaborative learning. Their use both emerged from perceived "Zoom fatigue" and also contributed to that fatigue depending on how it was implemented.

Globally, writing on the World Economic Forum website suggests that this idea of a New Normal "must not be the lens through which we examine our changed world." Why? One reason is that what we call "normal" has not worked for a majority of the world's population. "So why would it start working now?" Then what should we do? The writer suggests that we "should use our discomfort to forge a new paradigm instead."

Large scale change - a new paradigm - is a lofty goal and what that new paradigm would be is still far from clear. Stay tuned.

What Is a Non-Fungible Token - NFT?

I read that the American rock band Kings of Leon is getting in on NFTs (non-fungible tokens). They are not the first. I looked into this term, which I was not familiar with, and found that the artist Grimes sold a batch of NFTs for nearly $6 million, and an NFT of LeBron James making a historic dunk for the Lakers garnered more than $200,000. The auction house Christie's got bids in the millions for the artist Beeple.

NFT (sometimes pronounced niff-tee) stands for "non-fungible token," meaning a token that you can't exchange for another thing of equal value. Fungibility is the ability of a good or asset to be interchanged with other individual goods or assets of the same type. Fungible assets simplify exchange and trade, since fungibility implies equal value between the assets. One comparison I found: you can exchange a $20 bill for two $10 bills because they are fungible. But an NFT is one of a kind.

These NFTs are used to create verifiable digital scarcity. They also give digital ownership. They seem to be used with things that require unique digital items like crypto art, digital collectibles, and online gaming.

This goes back to blockchain which has become an established way to provide proof of authenticity. Blockchain gets most of its attention because of its use with cryptocurrencies, like Bitcoin. Ownership is recorded on a blockchain which is a digital ledger.
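As a toy illustration of that ledger idea (not a real blockchain, which would add cryptographic signatures and distributed consensus), the difference between a fungible balance and a non-fungible ownership record can be sketched like this; all names and token IDs are invented:

```python
# Fungible: a balance -- any 10 units are interchangeable with any other 10.
balances = {"alice": 20, "bob": 0}
balances["alice"] -= 10
balances["bob"] += 10  # which particular "dollars" moved is meaningless

# Non-fungible: each token has a unique ID and exactly one owner.
nft_ledger = {"dunk-clip-29": "alice", "artwork-5000": "bob"}

def transfer_nft(ledger, token_id, sender, receiver):
    """Change ownership of one specific, unique token."""
    if ledger.get(token_id) != sender:
        raise ValueError("sender does not own this token")
    ledger[token_id] = receiver

transfer_nft(nft_ledger, "dunk-clip-29", "alice", "bob")
print(nft_ledger["dunk-clip-29"])  # bob
```

The fungible balance only tracks *how much* each party holds; the NFT ledger tracks *which* unique item each party holds, which is what makes ownership of a one-of-a-kind digital object verifiable.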

Most NFTs use Ethereum, a decentralized, open-source blockchain featuring smart contract functionality. Ether, the platform's native cryptocurrency, is the second-largest cryptocurrency by market capitalization, after Bitcoin.

We heard recently that Elon Musk bought a lot of Bitcoin and will accept it as payment for his Tesla vehicles, and other vendors accept cryptocurrencies as payment. But NFTs are unlike cryptocurrencies because you can't exchange one NFT for another in the same way that you would with dollars. Its appeal is that each is unique and acts as a collector’s item that can’t be duplicated. They are rare by design, like limited editions and prints. 

And now, with music, proponents say that NFTs could help artists struggling with digital piracy, low streaming royalty rates and a lack of touring revenue from the last year of Covid-19 pandemic restrictions.

Strong and Weak AI

programming
Image by Gerd Altmann from Pixabay

Ask several people to define artificial intelligence (AI) and you'll get several different definitions. If some of them are tech people and the others are just regular folks, the definitions will vary even more. Some might say that it means human-like robots. You might get the answer that it is the digital assistant on their countertop or inside their mobile device.

One way of differentiating AI that I don't often hear is by the two categories of weak AI and strong AI.

Weak AI (also known as “Narrow AI”) simulates intelligence. These technologies use algorithms and programmed responses and generally are made for a specific task. When you ask a device to turn on a light or what time it is or to find a channel on your TV, you're using weak AI. The device or software isn't doing any kind of "thinking" though the response might seem to be smart (as in many tasks on a smartphone). You are much more likely to encounter weak AI in your daily life.

Strong AI is closer to mimicking the human brain. At this point, we could say that strong AI is “thinking” and "learning," but I would keep those terms in quotation marks. Those definitions of strong AI might also include some discussion of technology that learns and grows over time - which brings us to machine learning (ML), a subset of AI.

ML algorithms are becoming more sophisticated, and it might excite or frighten you as a user that they are getting to the point where they learn and act based on the data around them. In "unsupervised ML," the algorithm finds patterns in data on its own, without labeled examples or explicit programming. In the sci-fi nightmare scenario, the AI no longer needs humans. Of course, that is not even close to true today: the AI requires humans to set up the programming and supply the hardware and its power. I don't fear an AI takeover in the near future.
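As a concrete, minimal example of unsupervised learning, here is a toy k-means clustering run (k = 2, one-dimensional data). The algorithm is never told which group any point belongs to, yet it finds the two clusters on its own; the function name and data are invented for illustration:

```python
# Toy k-means with k=2 on 1-D data: no labels, no programmed answers.
def kmeans_1d(points, iters=10):
    # Start the two cluster centers at the extremes of the data.
    centers = [min(points), max(points)]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            # Assign each point to its nearest center.
            idx = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            groups[idx].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

data = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
print(kmeans_1d(data))  # [2.0, 11.0] -- the two clusters it discovered
```

Nothing in the code says "low group" or "high group"; the structure emerges from the data, which is the sense in which unsupervised ML "does not need to be explicitly programmed" with the answer.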

But strong AI and ML can go through huge amounts of connected data and find useful patterns, including patterns and connections that it is unlikely a human would find. Recently, you may have heard of attempts to use AI to help find a coronavirus vaccine. AI can do very tedious, data-heavy, and time-intensive tasks in a much faster timeframe.

If you consider what your new smarter car is doing when it analyzes the road ahead, the lane lines, objects, your speed, the distance to the car ahead and hundreds or thousands of other factors, you see AI at work. Some of that is simpler weak AI, but more and more it is becoming stronger. Consider all the work being done on autonomous vehicles over the past two decades, much of which has found its way into vehicles that still have drivers.

Of course, cybersecurity and privacy become key issues when data is shared. You may feel more comfortable allowing your thermostat to learn your habits, or your car to learn how and where you drive, than letting the government know that same data. Consider the level of data we share online doing financial operations, or even just visiting sites, making purchases, and searching, and you'll find the level of paranoia rising. I may not know who you are as you read this article, but I suspect someone else knows - and is more interested in knowing than I am.

Tik Tok and To Tok

A few recently banned apps in the news should be of interest to educational institutions where students may very well be using them - and even some schools themselves may be using them. Here are two summaries from The Newsworthy podcast:

The U.S. Navy banned TikTok from government-issued smartphones. They say the video-sharing app could be a cybersecurity threat. The Navy didn't expand on the reason, but we do know the U.S. has opened a national security review into TikTok’s China-based parent company, ByteDance. TikTok hasn’t commented but has said before that it follows U.S. rules.
Read more: Reuters

And then I read that ToTok (not to be confused with TikTok) has been banned from Google and Apple’s app stores. The messaging app appears to have millions of downloads, but the government of the United Arab Emirates allegedly uses it to track locations and conversations. If you have it, experts say you should uninstall it.
Read more: NYT, Wired

Tracking Your Health Data

The Verge reported that in another move to gain more of our personal data, Google is teaming up with the nation’s second-largest health system, Ascension, in an effort it’s calling Project Nightingale.

Ascension will share the personal health data of tens of millions of patients with Google’s Cloud division in order to develop a search engine for medical records and other new artificial intelligence services for medical providers. That sounds helpful. But the announcement came right after Google announced it was buying the fitness tracker company Fitbit.

We could assume that Google is interested in selling this kind of hardware, but access to Fitbit users' personal data could be an even bigger and more profitable asset. (Fitbit data has already been used in some serious but non-health matters, such as police investigations.)

Google is certainly not alone in wanting to gain this type of personal wellness data and do health care-tech collaborations. Apple would like to see its watch (similar to but more powerful than most fitness trackers) function as a medical-monitoring device. Its health data-sharing capabilities through Apple HealthKit are being enhanced.

All these companies will point out that the data they obtain is anonymized, but there are many examples of anonymized data being reversed so that it is far less anonymous. Are laws and policies ready for all this?
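The reversal that the paragraph above warns about is often a simple linkage attack: joining "anonymized" records to a public dataset on quasi-identifiers such as ZIP code and birth year. A toy sketch, with entirely invented records:

```python
# "Anonymized" health records: names removed, but quasi-identifiers remain.
anonymized_health = [
    {"zip": "07102", "birth_year": 1980, "condition": "hypertension"},
    {"zip": "07030", "birth_year": 1992, "condition": "asthma"},
]

# A public roster (voter rolls, social profiles, etc.) with the same fields.
public_roster = [
    {"name": "Pat Doe", "zip": "07102", "birth_year": 1980},
    {"name": "Sam Roe", "zip": "07030", "birth_year": 1992},
]

def reidentify(anon, public):
    """Join the two datasets on (zip, birth_year) to recover identities."""
    matches = []
    for record in anon:
        for person in public:
            if (person["zip"], person["birth_year"]) == \
               (record["zip"], record["birth_year"]):
                matches.append((person["name"], record["condition"]))
    return matches

print(reidentify(anonymized_health, public_roster))
# [('Pat Doe', 'hypertension'), ('Sam Roe', 'asthma')]
```

When the combination of quasi-identifiers is unique enough, removing names alone does very little - which is exactly why "anonymized" deserves the scare quotes.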

To Build a Surveillance State

If you wanted to build a surveillance state, what would you do?
...you would have a wiretap in your home listening to your conversations. You'd have cameras on every door seeing who is coming in, and a network of neighbors spying on you... facial recognition capabilities... a system knowing what you read and watch and buy. When you think about it, that's what Amazon offers you. Alexa in our homes is listening. Rings on our doors watch, and the neighbors' apps are telling on each other. They know what you read through Kindle and what you buy through Amazon, and they're pretty good at predictive analytics. So in some ways Amazon is building a very effective surveillance state that we would be offended by if the government tried to mandate it, but somehow as consumers we seem okay with giving up this information to a private company.

- Andrew Ferguson, author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, in an interview on Marketplace Tech

Hack Clubs


I saw an interesting article about teen hackers who have to convince their parents that what they're doing is good rather than evil.

Wikipedia defines a hacker as a skilled computer expert who uses their technical knowledge to overcome a problem. But while "hacker" can refer to any skilled computer programmer, the term has become associated in popular culture with the "security hacker": someone who uses bugs or exploits to break into computer systems.

These high-school students are forming hack clubs to solve problems through coding in their schools. In this context, we can define hacking as coding, creating sites and apps, as in hackathons.  The hack clubs are generally student-led after school activities.

The term "white hat" refers to an ethical computer hacker. This computer security expert specializes in penetration testing and in other testing methodologies to ensure the security of an organization's information systems. They hack for good. The term "ethical hacking" is a broader term that means more than just penetration testing.

Following the cowboy movie iconography, the "black hat" is a malicious hacker. I have also seen the blended gray hat hacker described as one who hacks with good intentions but without permission.

I suppose the question that parents of a hacker - and educators and the authorities - might have is whether a young person starting as a white hat might become gray and be drawn to the dark side of black hat hacking.

 

 

Even Facebook Wishes It Could Clear Its History

This year it was revealed that a lot more apps are automatically sending data to Facebook. In some cases this happens even if the user is logged out of Facebook. For Android devices this includes an odd mix: Spotify, Kayak, Yelp, Shazam, Instant Heart Rate, Duolingo, TripAdvisor, and The Weather Channel.

More recently, a Wall Street Journal study found that apps in Apple's iOS App Store are doing the same thing. In some cases, you have to wonder why the apps are sending personal data on things like age, body weight, blood pressure, and menstrual cycles.

Instant Heart Rate: HR Monitor is an app that was sending a user's heart rate to Facebook immediately after it was recorded, and Flo Period & Ovulation Tracker passed on when a user was having her period or when she informed that app about an intention to get pregnant.

Not to exonerate Facebook, but the apps were not "required" to pass that data to Facebook. Part of the blame certainly goes to app developers for some laziness. Many developers use Facebook's pre-built software development kit (SDK). These pre-built SDKs let developers build apps quickly, and the SDK will typically transmit most of the data to Facebook automatically.
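One mitigation, sketched here with invented names (this is not the real Facebook SDK API), is for the developer to scrub sensitive fields from an event before handing it to any auto-transmitting analytics SDK:

```python
# Hypothetical event-scrubbing filter: drop sensitive fields so an
# auto-transmitting analytics SDK never sees them. All names invented.
SENSITIVE_KEYS = {"heart_rate", "blood_pressure", "body_weight", "cycle_day"}

def scrub_event(event):
    """Return a copy of the event with sensitive fields removed."""
    return {k: v for k, v in event.items() if k not in SENSITIVE_KEYS}

event = {"screen": "results", "heart_rate": 72, "app_version": "3.1"}
print(scrub_event(event))  # {'screen': 'results', 'app_version': '3.1'}
```

The point of the sketch is that the filtering has to happen in the app's own code, before the SDK call - once the data reaches a pre-built SDK that transmits automatically, the developer has already lost control of it.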

Actually, Facebook claims that it tells app developers not to send "health, financial information or other categories of sensitive information." Since the WSJ report, they are telling developers of the flagged apps to stop sending that type of information. 

Why would Facebook want that kind of information anyway? It always comes down to targeting advertising. 

Denise Howell's latest free newsletter reminds us that Facebook's Mark Zuckerberg promised last year that there would be a "Clear History" feature that would allow users to check what information applications and websites have shared with Facebook and delete it. So far, it has not been released.

Denise (a well known lawyer due to her podcasting and social media presence) says:

It hasn’t happened yet, but the FTC is expected to impose a record-breaking fine against Facebook resulting from the company’s failure to comply with a 2011 consent order aimed at privacy violations that took place over eight years ago. In the ensuing eight years, Facebook’s privacy record hasn’t exactly been pristine. Accordingly, EPIC, Common Sense Media, and others think Facebook should be fined in excess of $2 billion. Jason Kint told Vice Media, “[a] fine almost certainly would not be enough to change Facebook’s behavior — we’re past that,” and I’m inclined to agree with him. For example: even after all the outrage against and scrutiny of Facebook over the past year, if you as a Facebook user want to make all your past posts private, viewable only to you, and if you want to do this all at once (as opposed to one post at a time; which is possible but who does that), you simply can’t. This is true even though Facebook actually provides a batch feature to limit the visibility of past posts; it just limits the ability to limit, which ends at “Friends.” (Let s/he here who hasn’t over-friended on Facebook cast the first stone.) If Facebook remains tone-deaf to this unfathomable extent, then perhaps it does need more than a record-breaking fine to encourage it to course-correct. Oh, and that “Clear History” tool Zuck announced at F8 last year? The one that was supposed to let people delete Facebook’s record of what they’ve clicked, Web sites they’ve visited, and other information Facebook gets from sites and apps using FB’s ads and analytics, and was ALSO supposed to let people turn off FB’s collection of their browsing history? Yeah, that was last May, and “Clear History” is nowheresville. So, what’s a lawmaker to do?

Working in public relations for Facebook must be a tough job these days. Clear your history, indeed.

In looking back at my own posts about Facebook, I found one from March 2006 in which I said "So You Think Facebook Is a Waste." Thirteen years ago, the idea of social media was treated by many as a fad. Facebook was a two-year-old site but was already the seventh-most heavily trafficked site on the Internet, with 5.5 billion page views. It was threatening enough as a business that Rupert Murdoch's News Corporation bought Facebook's only competitor at the time - MySpace. There is an entire chunk of the younger population that has never even heard of MySpace, which in 2005 sold for $580 million. Not a good investment, but who knew? The site still had more than 37 million unique visitors in February 2006, with 23.5 billion page views - the second-most trafficked site after Yahoo, beating Google.

How things have changed.

Ethical Tech

Reading the latest newsletter from Amber Mac, a topic that caught my education eye is ethical tech. I hope educational use of tech always stresses ethical use, but is this also a topic that is being taught?

At the end of 2018, The New York Times posted an article titled, "Yes, You Can Be an Ethical Tech Consumer. Here’s How" by Brian Chen, which notes that products that we enjoy continue to create privacy, misinformation and workplace issues. That article makes some recommendations, ranging from Boycott and Shame (not so radical if you consider the 2018 #DeleteFacebook campaign that I don't think was all that successful) to paths that mean we Give Up Convenience for Independence - something that is as easy as fulfilling that resolution to diet and exercise.

Of course, I am on the side of educating the public and our students at all grade levels about the ethical use and applications of technology. Students are largely consumers of the tech now, but they will be its creators. Did Mark Zuckerberg ever have any courses or lessons on the ethical use of technology?

I know that at NJIT, where I taught, there were a number of courses that touched on ethical issues. In the management area, "Legal and Ethical Issues: Explores the legal and ethical responsibilities of managers. Analyzes extent to which shareholders should be allowed to exercise their legitimate economic, legal, and ethical claims on corporate managers; extent of regulation of a particular industry, individual rights of the employee and various corporate interests, and corporate responsibility to consumers, society, and conservation of natural resources and the environment." Of course, you have to get to the graduate level for that course.

In my own humanities area of Professional and Technical Communication, we started addressing ethics in the foundation courses - but it is only one topic in a busy curriculum, along with usability analysis, visual information, global diversity and communication concerns, and communicating with new technologies.

In computer science, "Computers, Society and Ethics" is a 300 level course that examines the historical evolution of computer and information systems and explores their implications in the home, business, government, medicine and education. The course includes discussions of automation and job impact, privacy, and legal and ethical issues. Obviously, ethical use needs to be a part of many courses at a science and technology school, as well as being the subject matter of entire courses.

Amber says in her newsletter that, looking ahead, "We will also continue to see social responsibility expand beyond the consumer. For example, let's think about investment dollars into new technologies. In the US alone, according to PitchBook, venture capital investment in US companies hit $100B in 2018. If we dig into these dollars, there are very few memorable headlines about ethical investments, but that is bound to change - especially as executives at large tech companies set new standards."

Engineers, designers, technical communicators and managers need to be better prepared for the world they are entering professionally. I proposed a course at NJIT on Social Media Ethics and Law that has yet to be approved or offered.

In terms of momentum on ethical use within companies, Amber points to software giant Salesforce as a leader. As CNBC reported, the company will have its first Chief Ethical and Humane Use Officer in 2019. And she points to Patagonia, a company that prides itself on being ethical and sustainable, as "the north star of ethical business practices," suggesting that tech CEOs like Mark Zuckerberg should take a long look at Patagonia's many years of dedicated corporate responsibility. Patagonia announced it will donate the $10M the company saved via the GOP tax cuts to environmental groups. Amber points out that Patagonia has a long history of providing consumers with access to its supply chain footprint, and she asks if that might be the kind of thing Gen Z may demand from the companies from whom they purchase. They might - if they are properly educated on the ethical use of technology.

Data Protection and Privacy - Europe and the U.S.

Imagine a meeting where Apple CEO Tim Cook gave a speech, followed by Sir Tim Berners-Lee, and after the lunch break Facebook's Mark Zuckerberg and Google's Sundar Pichai appeared on screen with video messages. You would consider that a pretty high-powered meeting.

That was the lineup for some European data regulators at the 40th International Conference of Data Protection and Privacy Commissioners, held this year in the European Parliament in Brussels.

I saw part of it on a recent 60 Minutes. Tim Cook talked about the "crisis" of "weaponized" personal data. It's not that Apple doesn't collect data on its users, but companies like Facebook and Google rely much more on user data to sell advertising than hardware-based Apple does.

The focus in that segment is on Europe, where stricter laws than in the U.S. are already in place. Of course, they affect American companies that operate in Europe, which is essentially all major companies.

Multi-billion dollar fines against Google for anti-competitive behavior are in the news. The European Union has enacted the world's most ambitious internet privacy law, the General Data Protection Regulation (GDPR).

Tim Cook said he supports the law, but Jeff Chester, executive director of the Center for Digital Democracy, says that "Americans have no control today about the information that's collected about them every second of their lives." The only exception is some guaranteed privacy on the internet for children under 13, and some specific medical and financial information.

This is an issue that will be even more critical in the next few years. Since GDPR was passed, at least ten other countries and the state of California have adopted similar rules. And Facebook, Twitter, Google, and Amazon now say they could support a U.S. privacy law. Of course, they want input because they want to protect themselves.

 

Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World

Data Privacy Law: A Practical Guide

Fifty-Two Thousand Data Points

data abstraction

Facebook has had a tough year in the press and with its public face (though its stock is holding up fine). There has been a lot of buzz about hacks, stolen data, fake news, Senate hearings, and general privacy concerns. All of these are legitimate concerns about Facebook, and about other social media, e-commerce, and financial sites too.

But how much does Facebook really know about a user? There is the information you willingly provided when you joined, and all the things you have given them by posting, clicking Likes, and other interactions. Though that volunteered data is often overlooked by users, there is more concern about data you have not knowingly given them but that they have access to anyway.

I do believe that Facebook is more focused now on privacy and user experience, but it needs that data to remain a profitable public company. (Disclaimer: I am a Facebook stockholder - though not in a very big way.)

Facebook is free but, as Mark Zuckerberg had to explain to at least one clueless Senator this past summer, it sells advertising to make a profit. Ad sales are more valuable to companies when they know who they are advertising to, and the more granular that audience can be, the better it is for them and Facebook. It might even be better for you. That is something Google, Amazon, Facebook and many others have been saying for years: If you're going to get ads anyway, wouldn't you rather that they be relevant to your likes and interests?

According to one online post, if you total up what Facebook can know about a user, it comes to roughly 52,000 traits. That knowledge comes from three key algorithms. One is DeepText, which looks into data, much of which comes from commercial data brokers. They also use DeepFace, which can identify people in pictures and suggest that you tag people in a photo.

The third algorithm is FB Learner Flow, which might be the cleverest of all. It focuses on the decisions you have yet to make. Using predictive analytics, it decides which ads to show you - ones you would be likely to click, perhaps even leading you to purchase a product.
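Facebook's actual models are not public, but the general idea behind this kind of predictive ad targeting can be sketched as a simple click-probability model: score each candidate ad against a user's traits and show the highest-scoring one. Everything here - the trait names, the weights, the ads - is invented purely for illustration.

```python
import math

def click_probability(weights, features, bias=0.0):
    """Logistic model: estimated probability that this user clicks this ad."""
    z = bias + sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-trait weights, as if learned from past click behavior.
weights = {"likes_outdoors": 1.2, "recent_search_camping": 2.0, "age_over_50": -0.3}

# Candidate ads described by which user traits they match.
ads = {
    "tent_sale":    {"likes_outdoors": 1, "recent_search_camping": 1},
    "office_chair": {"age_over_50": 1},
}

# Rank candidate ads by predicted click probability and show the top one.
ranked = sorted(ads, key=lambda ad: click_probability(weights, ads[ad], bias=-1.0),
                reverse=True)
print(ranked[0])  # → tent_sale
```

Real systems use far richer features (those 52,000 traits) and far larger models, but the shape of the decision - score, rank, serve - is the same.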

Amazon will allow you to let it send out products before you order them based on your previous orders and usage. This is not difficult to predict. My pharmacy will tell me it is time to reorder a prescription and even process and deliver it without my input. That is not so predictive; my 30 daily pills will run out in 30 days.
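The pharmacy case really is just arithmetic, which is what makes it "not so predictive." A minimal sketch (the lead time and numbers are made up for the example):

```python
from datetime import date, timedelta

def reorder_date(last_fill, pills_dispensed, pills_per_day, lead_days=3):
    """Date to trigger a refill so it arrives before the supply runs out."""
    days_of_supply = pills_dispensed // pills_per_day
    return last_fill + timedelta(days=days_of_supply - lead_days)

# 30 pills at one per day, filled January 1: reorder 3 days before running out.
print(reorder_date(date(2019, 1, 1), 30, 1))  # → 2019-01-28
```

No machine learning required - which is exactly the contrast with systems like FB Learner Flow that try to predict behavior you haven't exhibited yet.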

When Amazon suggests that I might like a product similar to other things I have bought, it's not very creepy. When I see an ad or suggestion from them about a product or even a topic that I was just searching on Google, THAT is creepy.

Similarly, Facebook might give me an ad or a "sponsored" post at the top of my feed because of my recent activity and the activity of friends that I follow - especially those I interact with frequently through Likes, shares, and comments.

It would be interesting to see what the feeds look like for some friends of mine who are Facebook lurkers and who rarely post anything and seem to rarely even log into the site. What are they seeing when it comes to ads?