Tuesday October 29, 2024 11:20am - 12:20pm EDT
Analyzing Microtargeting on Social Media
Tunazzina Islam (Purdue University), Dan Goldwasser (Purdue University)
The landscape of social media is highly dynamic, with users generating and consuming a diverse range of content. Various interest groups, including politicians, advertisers, and other stakeholders, use these platforms to reach potential users and advance their interests by adapting their messaging. This process, known as microtargeting, relies on data-driven techniques that exploit the rich information social networks collect about their users. Microtargeting is a double-edged sword: while it enhances the relevance and efficiency of targeted content, it also poses challenges, including the risk of influencing user behavior and perceptions and of fostering echo chambers and polarization. Understanding these impacts is crucial for promoting healthy public discourse in the digital age and maintaining a cohesive society. Our work develops an organizing framework for better understanding microtargeting and activity patterns around contentious topics on social media. In this tutorial at ADSA 2024, tailored for researchers and practitioners, we present the challenges we face in this work and the computational approaches we developed to address them: (1) characterizing user types and their motivations for engaging with content, (2) analyzing messaging based on topics relevant to users and their responses to it, and (3) developing a deeper understanding of the themes and arguments involved in the content. The tutorial offers computational methods, NLP tools, and analytical frameworks for exploring online messaging dynamics.
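As a rough illustration of step (2), analyzing messaging by topic, the sketch below extracts each group's most frequent distinctive terms with simple term counts. This is a toy example, not the tutorial's actual pipeline: the messages, group names, and stopword list are all invented for illustration.

```python
from collections import Counter

# Toy messages from two hypothetical interest groups (invented for
# illustration; real analyses operate on large-scale social media data).
messages = {
    "group_a": ["clean energy creates jobs", "invest in clean energy now"],
    "group_b": ["energy independence first", "jobs depend on energy independence"],
}

# Minimal stopword list for the toy data.
STOPWORDS = {"in", "on", "now", "first", "the"}

def top_terms(texts, k=2):
    """Return the k most frequent non-stopword terms across a group's messages."""
    counts = Counter(
        word
        for text in texts
        for word in text.lower().split()
        if word not in STOPWORDS
    )
    return [term for term, _ in counts.most_common(k)]

for group, texts in messages.items():
    print(group, top_terms(texts))
# group_a's top terms come out as ['clean', 'energy'],
# group_b's as ['energy', 'independence']
```

Real microtargeting analyses replace the raw counts with richer representations (topic models, contextual embeddings), but the grouping-then-summarizing structure is the same.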


Open RL Benchmark: Comprehensive Tracked Experiments for Reinforcement Learning
Md Masudur Rahman (Purdue University)
In many Reinforcement Learning (RL) papers, learning curves are useful indicators for measuring the effectiveness of RL algorithms. However, the complete raw data of the learning curves are rarely available. As a result, it is usually necessary to reproduce the experiments from scratch, which can be time-consuming and error-prone. We present Open RL Benchmark, a set of fully tracked RL experiments that include not only the usual data, such as episodic return, but also all algorithm-specific and system metrics. Open RL Benchmark is community-driven: anyone can download, use, and contribute to the data. At the time of writing, more than 25,000 runs have been tracked, for a cumulative duration of more than eight years. Open RL Benchmark covers a wide range of RL libraries and reference implementations. Special care is taken to ensure that each experiment is precisely reproducible by providing not only the full parameters but also the versions of the dependencies used to generate it. Additionally, Open RL Benchmark comes with a command-line interface (CLI) for easily fetching and generating figures to present the results. In this document, we include two case studies to demonstrate the usefulness of Open RL Benchmark in practice. To the best of our knowledge, Open RL Benchmark is the first RL benchmark of its kind, and we hope that it will improve and facilitate the work of researchers in the field.
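One thing fully tracked runs enable is re-aggregating learning curves across seeds without rerunning any experiments. Below is a minimal sketch assuming per-seed episodic returns have already been fetched and aligned to common evaluation steps; the arrays are synthetic, and this is not the project's own tooling (Open RL Benchmark's CLI handles fetching and figure generation directly).

```python
from statistics import mean, stdev

# Synthetic per-seed episodic returns, aligned to common evaluation steps
# (in practice these values would come from the tracked runs).
steps = [10_000, 20_000, 30_000]
returns_by_seed = [
    [100.0, 150.0, 200.0],  # seed 0
    [90.0, 160.0, 210.0],   # seed 1
    [110.0, 140.0, 190.0],  # seed 2
]

def aggregate(curves):
    """Mean and sample standard deviation of episodic return at each step."""
    per_step = list(zip(*curves))  # group the seeds' returns by step
    return (
        [mean(vals) for vals in per_step],
        [stdev(vals) for vals in per_step],
    )

means, stds = aggregate(returns_by_seed)
for step, m, s in zip(steps, means, stds):
    print(f"step {step}: {m:.1f} ± {s:.1f}")
```

Because the benchmark also records algorithm-specific and system metrics, the same aggregation pattern applies to quantities beyond episodic return, such as wall-clock time or loss values.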
Hussey, The Michigan League
