Jessie G Taft

DL Seminar | Personalized Recommender Systems: Technological Impact and Concerns

Updated: Aug 18, 2021


Individual reflections by Yan Ji and Daphne Na (scroll below).



By Yan Ji

Cornell Tech


Today, recommender systems play a significant role in most of our online activities, helping people find content they might be interested in amid the massive amount of information online. While offering enticing promise in automatic pattern extraction and prediction, recommender systems can also have alarming adverse impacts on individual users, content providers, and society. In her inspiring talk entitled “Personalized Recommender Systems: Technological Impact and Concerns”, Amy Zhang, a DLI Doctoral Fellow and PhD Candidate in Operations Research and Information Engineering (ORIE) at Cornell Tech, gave us a high-level overview of common techniques for personalized recommender systems and how they connect to problems on both the personal and societal level. After presenting some alternative approaches that address these issues, she also pointed out why a solution cannot come from technology alone.


Online recommender systems are a staple of our existence nowadays. We get content recommendations on our social feeds, e-commerce sites, ads, search results, and even auto-completion. Recommender systems are useful not only for filtering down the vast amount of information out there, but also for personalizing recommendations to cater to individuals’ tastes, using analytics to uncover patterns that users themselves may not even be aware of.


To uncover the black magic of recommender systems, Amy provided a comprehensible overview of the technical background. First of all, the motivation of a recommender system is to find more relevant content for each individual using information about what that user found interesting in the past. This is inferred from signals, either direct, where users express sentiment through ratings or feedback, or indirect, such as purchase history or access patterns. The goal of a recommender system is to predict the relevance of a new item for a given user, where relevance is represented by the signal. The performance of a method toward this goal is often evaluated by how accurately its predictions recover the signal. There is an underlying assumption, though, that past signals are a good predictor of future preferences. The most common approaches fall into several categories:


1. Content-based recommendation (the most basic form). This approach attempts to find items similar to those a user has liked in the past. One issue with this method, however, is over-specialization, i.e., it only recommends things similar to what a user liked before and never ventures outside that domain.


2. Collaborative filtering (CF). This approach mitigates over-specialization by leveraging quality judgements from other users when making predictions. There are two subcategories (a minimal sketch of both follows this list):


a. Memory-based CF. This approach either finds users similar to the given user, so that their judgements can serve as a good proxy for prediction (user-user), or finds items that are similar based on users’ interactions with them (item-item). Although over-specialization is no longer an issue, we may encounter the cold-start problem: it becomes hard to make predictions for a new user who lacks enough history to find similar peers, or for a new item that lacks existing interactions. In addition, there is a tendency for popular items to be recommended, which is called popularity bias.


b. Model-based CF. Assuming that preferences are the result of some underlying structure or pattern, this approach factorizes the matrix of preferences into two smaller matrices, each corresponding to latent features. It mitigates the cold-start problem, but popularity bias remains an issue.


3. Markov decision process (MDP). This is a mathematical model used to formulate sequential decision problems in which a decision has impact beyond the immediate future. It is also the underlying framework of reinforcement learning (RL). Unlike the previous methods, which do not take into account the dynamic nature of users, items, and their interactions, an MDP considers utility over a longer time horizon and captures the sequential nature of user interactions (a toy example follows below).
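To make the two CF flavors concrete, here is a minimal sketch in Python. This is not the exact method from the talk; the tiny ratings matrix, rank, and hyperparameters are all made up for illustration. The first half predicts a missing rating by similarity-weighted averaging over other users (memory-based, user-user); the second fits a low-rank factorization by stochastic gradient descent on the observed entries (model-based).

```python
import numpy as np

# Hypothetical ratings matrix: rows = users, cols = items, 0 = "not rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# --- Memory-based (user-user) CF ---
def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict_user_user(ratings, user, item):
    # Weight other users' ratings of `item` by their similarity to `user`.
    sims = np.array([cosine_sim(ratings[user], ratings[u])
                     for u in range(ratings.shape[0])])
    mask = ratings[:, item] > 0
    mask[user] = False
    if not mask.any():
        return 0.0  # cold start: nobody has rated this item yet
    return sims[mask] @ ratings[mask, item] / sims[mask].sum()

print(predict_user_user(ratings, user=0, item=2))  # low score, ~2

# --- Model-based CF: factorize ratings into two rank-k matrices ---
k, lr, reg = 2, 0.01, 0.1
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(ratings.shape[0], k))  # user features
V = rng.normal(scale=0.1, size=(ratings.shape[1], k))  # item features
for _ in range(2000):
    for i, j in np.argwhere(ratings > 0):  # observed entries only
        u, v = U[i].copy(), V[j].copy()
        err = ratings[i, j] - u @ v
        U[i] += lr * (err * v - reg * u)
        V[j] += lr * (err * u - reg * v)

print(U[0] @ V[2])  # predicted score for the unobserved (user 0, item 2) cell
```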
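For the MDP view, here is a toy value-iteration example, again with invented transition probabilities and rewards. A myopic recommender would always pick the high-click option ("clickbait"), but because that option pushes users toward a low-reward "fatigued" state, the long-horizon optimal policy prefers the content that keeps users engaged.

```python
import numpy as np

# States: 0 = "engaged", 1 = "fatigued".  Actions: 0 = clickbait, 1 = quality.
# P[a][s, s'] = transition probability; R[a][s] = immediate reward (made up).
P = np.array([
    [[0.2, 0.8], [0.1, 0.9]],   # clickbait: high chance of fatigue
    [[0.9, 0.1], [0.3, 0.7]],   # quality: tends to keep/recover engagement
])
R = np.array([
    [1.0, 0.05],                # clickbait: big immediate reward when engaged
    [0.6, 0.05],                # quality: smaller immediate reward
])

gamma, V = 0.95, np.zeros(2)
for _ in range(500):            # value iteration to (near) convergence
    Q = R + gamma * P @ V       # Q[a, s]: value of action a in state s
    V = Q.max(axis=0)

print("optimal action per state:", Q.argmax(axis=0))  # [1 1]: quality wins
```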


There are many other approaches, each addressing some issues while leaving others open. However, problems that stem from the premise of the framework cannot be resolved without adjusting the framework itself. Note that there are stakeholders other than users here. For example, platforms serve their own monetary interests rather than necessarily serving whatever users like, and providers want their products or content to be seen as widely as possible. Society is also a stakeholder, and the effects of recommendations can easily scale up to impede or promote certain values of society at large. Amy discussed how issues resulting from this premise could, beyond creating ineffective recommendations for individuals, scale up to have concerning societal impacts. She also referred us to a paper on the politics of search engines co-authored by Prof. Helen Nissenbaum, which contains very relevant discussion.


First, both content-based methods and CF have the issue of popularity bias, a tendency to promote popular content. On an individual user’s level, this merely causes the disappointment of not discovering quality content that may be more obscure. But from the provider’s side, less mainstream content is made even less visible by the system, contrary to the goal the system set out to achieve. This has an impact on a societal level because it is essentially an instance where those with prominence are further amplified, crowding out smaller voices. Technical alternatives include developing alternative evaluation criteria, such as diversity or novelty, that try to branch out and explore, hopefully boosting the voice of smaller and more unique content.
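One way to see popularity bias in the numbers is with simple diagnostics over a batch of recommendation lists. A rough sketch (interaction counts and lists are made up): catalog coverage asks how much of the catalog ever gets recommended, while average popularity asks whether the lists lean on already-popular items.

```python
from collections import Counter

catalog = {"a", "b", "c", "d", "e", "f"}
interactions = ["a", "a", "a", "a", "b", "b", "c", "d"]  # historical plays
popularity = Counter(interactions)                        # item -> count

rec_lists = [["a", "b"], ["a", "c"], ["a", "b"]]          # per-user top-2

recommended = {item for recs in rec_lists for item in recs}
coverage = len(recommended) / len(catalog)
avg_pop = (sum(popularity[i] for recs in rec_lists for i in recs)
           / sum(map(len, rec_lists)))

print(f"catalog coverage: {coverage:.0%}")            # 50%: e, f never surface
print(f"avg. popularity of recommended items: {avg_pop:.2f}")
```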


Second, signals can be indirect, so measuring the quality of a recommendation by how accurately it predicts the signal may not reflect real quality judgments. Clickbait, for example, can be disproportionately propagated because clicks are perceived as a signal of quality. On a personal level, this leads to the frustration of not getting quality content. On a societal level, it creates the wrong incentive, encouraging more people to tweak headlines to grab attention and thereby powering the spread of fake news. In addition, more emotionally charged stories are often favored. There is a rising concern that this is making us more divided on issues, or even breeding hostility and tribalism. Because the root of the issue is the indirect signal, many platforms are recognizing this and putting in the effort to collect more direct feedback right at the source. Another route is human intervention in the form of content moderation.


Third, because the goal of recommender systems is to maximize immediate engagement, there is a trend toward creating attention stickiness, i.e., generating and recommending addictive content. This is problematic partly because it promotes more opportunities for online fights. On a societal level, it indirectly fuels the rise of multiple issues relating to mental health and general well-being. Some alternatives go back to the root of how the evaluation metric is defined. Facebook, for example, claimed to have changed its metric to measure what it calls meaningful social interaction.


Fourth, the goal of finding content similar to what users already like, or viewpoints similar to what users already think, creates an issue called filter bubbles, which strengthen beliefs we already hold. This may obstruct our ability to get enough exposure to different voices. Relatedly, matching similar people to each other leads to a problem called echo chambers: artificially surrounding each of us with those we agree with, reinforcing each other’s voices and further hindering us from seeking the truth. Because both filter bubbles and echo chambers are rooted in the pursuit of similarity, one technical alternative is to use a different metric that goes beyond accuracy. For example, one interesting metric people call serendipity tries to create a sense of diversity or novelty in recommendations (a sketch below shows one way it can be formalized).
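Serendipity has no single agreed definition, but one common formalization counts a recommendation as serendipitous when it is both relevant to the user and unexpected relative to an obvious baseline, such as a most-popular-items recommender. A toy sketch with made-up sets:

```python
baseline_recs = {"a", "b"}      # what a popularity baseline shows everyone
user_recs     = {"a", "e", "f"} # what our system showed this user
relevant      = {"a", "c", "e"} # items the user actually enjoyed

unexpected = user_recs - baseline_recs  # {"e", "f"}: not the obvious picks
serendipitous = unexpected & relevant   # {"e"}: unexpected AND enjoyed

print(len(serendipitous) / len(user_recs))  # ~0.33
```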


At the heart of a recommender system, as Amy pointed out, there are questions for individuals and society to ponder rather than simply leave up to technology. On a personal level, the act of choosing which information to engage with is quite personal and has close ties to how we view our identities. It’s probably worth asking ourselves whether it is our free will that chooses content for us, or a black-box algorithm designed to optimize the service provider’s interests, part of which is to make us more addicted to the platform. Back on the societal level, all of the alternatives above essentially require more human input to think deeply about value judgments, weigh the pros and cons, and seek balance. There really is a limit to what technology can solve, or even should be responsible for, without our engagement. Given the rising demand for transparency in recommender systems after the spread of election-related fake news, a collaborative effort from not only technologists, but also social scientists, policy makers, and legal scholars is urgently needed. DLI creates exactly such a space for cross-disciplinary discussions and studies.



By Daphne Na

Cornell Tech


Is Technology the Answer?


Whether we are aware of it or not, our daily activities are highly influenced by personalized recommender systems. When we are trying to find a movie to watch on Netflix or browsing Amazon for a new hand soap to buy, personalized recommender systems are at work to provide us with the best match, one likely to make us click that continue-watching or buy button. The systems look very convenient at first, but I cannot stop wondering: how are these systems so eerily accurate? What are their alarming adverse impacts, and how do we address them?


Amy Zhang’s presentation on this topic in the Digital Life Research Seminar provided some interesting insights into how our increasingly automated society can deal with problems generated by personalized recommender systems, and discussed alternative approaches to addressing these issues. As a PhD Candidate in Operations Research and Information Engineering at Cornell Tech, her research focuses on “approximating large systems by a small number of clusters in settings that can be modeled as Markov Decision Processes (MDP).” Amy applied her research to analyze personalized recommendations in depth.


How do they work?


Content-based filtering

The fundamental motivation of personalized recommendation is to filter content so that it is more relevant to each individual. The systems’ algorithms make inferences from both direct (ratings, feedback, etc.) and indirect (purchase history, access patterns, etc.) signals. The goal is to predict the relevance (the signal) of a new item for a given user. Based on the data from these signals, content-based filtering works to produce outputs that are similar to the user’s previous choices.
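As a rough illustration (the feature values and movie names are invented), content-based filtering can be as simple as ranking unseen items by similarity to a profile built from the items a user liked:

```python
import numpy as np

# Item features, e.g. (action, comedy, documentary) weights per movie.
items = {
    "movie_a": np.array([0.9, 0.1, 0.0]),
    "movie_b": np.array([0.8, 0.2, 0.0]),
    "movie_c": np.array([0.0, 0.1, 0.9]),
}
liked = ["movie_a"]  # direct/indirect signals say the user enjoyed this

# User profile = mean feature vector of liked items.
profile = np.mean([items[i] for i in liked], axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Rank unseen items by similarity to the profile; movie_b (another action
# film) wins, illustrating the over-specialization issue mentioned earlier.
unseen = [i for i in items if i not in liked]
for name in sorted(unseen, key=lambda i: cosine(profile, items[i]), reverse=True):
    print(name, round(cosine(profile, items[name]), 3))
```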


Collaborative filtering

Collaborative filtering works with the similarities among users and makes use of quality judgments from others to recommend relevant products. It utilizes User A’s information, such as what products they have seen, clicked on, or purchased before, to analyze whether their behavior is similar to User B’s. If so, the system recommends to User B items that User A has shown interest in.
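A toy version of that User A / User B logic (all data hypothetical), using simple set overlap as the similarity measure: because A and B overlap heavily, A’s other likes become B’s recommendations.

```python
likes = {
    "user_a": {"soap_x", "shampoo_y", "towel_z"},
    "user_b": {"soap_x", "shampoo_y"},
}

overlap = likes["user_a"] & likes["user_b"]  # shared interests
jaccard = len(overlap) / len(likes["user_a"] | likes["user_b"])

if jaccard > 0.5:  # "similar enough" (arbitrary cutoff for illustration)
    recs_for_b = likes["user_a"] - likes["user_b"]
    print(recs_for_b)  # {'towel_z'}
```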


Amy suggests that the current systems are not perfect and that there are issues with current filtering methods for personalized recommender systems. For example, the algorithms can only make recommendations based on a user’s already existing preferences, which can limit the space of things the user might be interested in. Recommendations may be inaccurate, leading users to spend more time figuring out what they actually want. Furthermore, there are privacy issues in feeding users’ information into personalized recommender systems.


Amy leaves the presentation with an important question to ask ourselves: can technology be the answer to this problem? At the end of the day, it is us, humans, who design these algorithms.

