Filippo Menczer is an American and Italian professor of informatics and computer science and the former director of the Center for Complex Networks and Systems Research, a research unit of the Indiana University School of Informatics, Computing, and Engineering. He holds courtesy appointments in Cognitive Science and Physics, is a founding member and advisory council member of the IU Network Science Institute, a senior research fellow of the Kinsey Institute, a fellow of the Center for Computer-Mediated Communication, and a former fellow of the Institute for Scientific Interchange in Turin, Italy. In 2013 he was named a Distinguished Scientist of the ACM.

Menczer holds a Laurea in physics from the Sapienza University of Rome and a PhD in computer science and cognitive science from the University of California, San Diego. He was previously an assistant professor of management sciences at the University of Iowa and a fellow-at-large of the Santa Fe Institute. At Indiana University Bloomington since 2003, he served as division chair in the Indiana University School of Informatics and Computing in 2009–2011. Menczer has received Fulbright, Rotary Foundation, and NATO fellowships, as well as a CAREER Award from the National Science Foundation. He holds editorial positions with the journals EPJ Data Science, Network Science, and PeerJ Computer Science, and has served as program or track chair for conferences including the International World Wide Web Conference and the International ACM Conference on Hypertext and Social Media. He was general chair of the ACM Web Science 2014 Conference and general co-chair of the NetSci 2017 Conference.

Menczer's research focuses on Web science, social networks, social media, social computation, Web mining, data science, distributed and intelligent Web applications, and modeling of complex information networks. He introduced the idea of topical and adaptive Web crawlers, a specialized and intelligent type of Web crawler. Menczer is also known for his work on social phishing, a type of phishing attack that leverages friendship information from social networks and yielded an over 70% success rate in experiments (with Markus Jakobsson); on semantic similarity measures for information and social networks; on models of complex information and social networks (with Alessandro Vespignani and others); on search engine censorship; and on search engine bias.

The group led by Menczer has analyzed and modeled how memes, information, and misinformation spread through social media in domains such as the Occupy movement, the Gezi Park protests, and political elections. Data and tools from Menczer's lab have aided in finding the roots of the Pizzagate conspiracy theory and the disinformation campaign targeting the White Helmets, and in taking down voter-suppression bots on Twitter.

Menczer and colleagues have advanced the understanding of information virality, in particular the prediction of which memes will go viral based on the structure of early diffusion networks, and how competition for finite attention helps explain virality patterns. In a 2018 paper in Nature Human Behaviour, Menczer and coauthors used a model to show that when agents in a social network share information under conditions of high information load and/or low attention, the correlation between the quality and the popularity of information in the system decreases. An erroneous analysis in the paper suggested that this effect alone would be sufficient to explain why fake news is as likely to go viral as legitimate news on Facebook; when the authors discovered the error, they retracted the paper.
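The general mechanism can be illustrated with a toy simulation. The sketch below is not the authors' model or code; it merely assumes, for illustration, that agents post memes with random quality scores or reshare items from a finite-length feed (the "attention" parameter), that higher-quality memes in a feed are more likely to be reshared, and that shortening the feed or raising the posting rate weakens the link between a meme's quality and its popularity.

```python
# Toy meme-diffusion sketch (illustrative only, not the published model).
import random
from collections import defaultdict

def simulate(n_agents=200, steps=20000, attention=10, p_new=0.5, seed=1):
    """Return the quality-popularity correlation for one simulated run."""
    random.seed(seed)
    feeds = [[] for _ in range(n_agents)]   # each agent's visible feed (finite attention)
    quality = {}                            # meme id -> quality score in (0, 1)
    popularity = defaultdict(int)           # meme id -> number of times shared
    next_id = 0
    for _ in range(steps):
        agent = random.randrange(n_agents)
        if random.random() < p_new or not feeds[agent]:
            meme, next_id = next_id, next_id + 1
            quality[meme] = random.random()                 # post a new meme
        else:
            weights = [quality[m] for m in feeds[agent]]
            meme = random.choices(feeds[agent], weights=weights)[0]  # reshare, favoring quality
        popularity[meme] += 1
        for nb in random.sample(range(n_agents), 5):        # the meme reaches a few feeds
            feeds[nb] = ([meme] + feeds[nb])[:attention]    # older items fall off
    # Pearson correlation between quality and popularity across all memes
    ids = list(quality)
    q = [quality[m] for m in ids]
    p = [popularity[m] for m in ids]
    mq, mp = sum(q) / len(q), sum(p) / len(p)
    cov = sum((a - mq) * (b - mp) for a, b in zip(q, p))
    return cov / (sum((a - mq) ** 2 for a in q) * sum((b - mp) ** 2 for b in p)) ** 0.5

# Longer feeds (more attention) tend to yield a stronger quality-popularity link.
print(simulate(attention=50), simulate(attention=2))
```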
Following influential publications on the detection of astroturfing and social bots, Menczer and his team have studied the complex interplay between cognitive, social, and algorithmic factors that make social media platforms and their users vulnerable to manipulation, and have focused on developing tools to counter such abuse. Their bot detection tool, Botometer, was used to assess the prevalence of social bots and their sharing activity. Their tool for visualizing the spread of low-credibility content, Hoaxy, was used in conjunction with Botometer to reveal the key role played by social bots in spreading low-credibility content during the 2016 United States presidential election.

Tools and projects developed by Menczer's group and collaborators include:

Observatory on Social Media (OSoMe): a research project, previously known as Truthy, that studies and visualizes how information spreads online. It includes data and tools to visualize Twitter trends, diffusion networks, and maps, to create movies, and a public API.

Botometer: a machine learning tool, previously known as BotOrNot, that detects social bots on Twitter. It includes public APIs and a repository of social bot datasets (a usage sketch follows this list).

Hoaxy: an open-source search engine and network visualization tool for studying the spread of articles from low-credibility and fact-checking sources on Twitter. It includes a public API.

Fakey: a mobile news-literacy game that mimics a social media news feed in which players must distinguish real news from fake news.

Scholarometer: a social tool and API that facilitates citation analysis and helps evaluate the impact of an author's publications. By crowdsourcing discipline annotations, the browser extension can provide a universal metric for comparing impact across disciplines.

Kinsey Reporter: a global mobile survey platform for sharing, exploring, and visualizing anonymous data about sex and sexual behaviors, developed in collaboration with the Kinsey Institute. Reports are submitted via the Web or a smartphone and are then available for visualization or offline analysis through a public API.

Other research projects are listed on the Networks & agents Network (NaN) website.
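As an illustration of the Botometer public API mentioned above, the following is a minimal sketch using the community-maintained botometer Python package. The API key, Twitter credentials, and account handles are placeholders, and the hosted endpoints and response fields have changed across Botometer versions, so this should be read as an assumption-laden example rather than a definitive recipe.

```python
# Illustrative sketch only: querying Botometer via the `botometer` Python package.
# All credentials and account handles below are placeholders.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"                   # placeholder
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",             # placeholder Twitter app credentials
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    twitter_app_auth=twitter_app_auth,
)

# Score a single account; the response includes bot-likeness scores
# broken down by bot category.
result = bom.check_account("@BotometerAPI")
print(result.get("display_scores"))

# Score several accounts in sequence.
for screen_name, result in bom.check_accounts_in(["@jack", "@BotometerAPI"]):
    print(screen_name, result.get("cap"))
```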