"We proposed this idea to Twitter and designed an experiment to evaluate it. This project also pioneers an open, evidence-based approach to improving people's experiences online, while protecting their privacy," said Susan Benesch, Faculty Associate at Harvard University, and J. Nathan Matias, Post-doctoral Research Associate at Princeton University.
To protect users' privacy, Twitter will give the researchers only anonymised, aggregated information, meaning the experiment will be conducted without the names or other personal details of any account.
"What makes this effort unique is that we have chosen to conduct it as an open collaboration under a set of legal, ethical and scientific constraints," the researchers said.
"This will protect Twitter users, safeguard the credibility of our work, and ensure that the knowledge gained from it will be available for anyone to use - even other internet companies.
The project will supply practical knowledge about preventing online abuse, and could inspire further transparent, independent evaluations of other ideas for reducing it.
As part of the research, the team will test abuse-prevention ideas. They will also trial a process for independent, outside evaluation of the policy and design interventions that tech companies make.
After the study, Twitter will prepare an aggregated dataset on an encrypted system and share it with the researchers for analysis.
The findings will be published in an academic journal and will be available to everyone.