Powered by OpenAIRE graph

Doteveryone

3 Projects
  • Funder: UK Research and Innovation Project Code: AH/S002952/1
    Funder Contribution: 413,306 GBP

    A well-constituted public sphere is essential for political legitimacy. According to the model inherited from Ancient Greece, the public sphere is a social space in which people discuss political problems, ideas, and policy proposals, and get an opportunity to influence political action. In the large democracies of the 20th century, the public sphere was no longer associated with a physical space (an agora); instead, it was primarily constituted by debates in print media, radio, and television. Today, as news and opinions are increasingly shared on social media and the old media wither or adapt, a new public sphere is being forged. In this new public sphere, traditional roles like 'investigative journalist' or 'news anchor' have lost their former significance. In their place, how often and how quickly a news item or opinion piece is shared has become a key factor affecting the attention it receives. New 'gatekeepers' - social media platforms and their algorithms alongside old-media editors - have altered traditional patterns of inclusion and exclusion. People who would not have sent a letter to The Times can make their voices heard in concert with peers (as in the #BlackLivesMatter and #MeToo campaigns). At the same time, 'fake news' (e.g. climate change scepticism, the anti-Obama 'birther' conspiracy theory) gains a wider following, and personalised online content polarises social groups. The problems our project aims to address are that the emergent new public sphere is ineffectively governed by older laws and regulations, and that our moral understanding of new public sphere roles - 'twitter user', 'platform provider' - is under-developed. We will engage in philosophical exploration of three justificatory principles appropriate for governing and conceptualising the public sphere, principles that could underpin a future media policy framework for the UK:

    1. Epistemic value principle: the public sphere should institutionalise practices that encourage the acquisition and sharing of knowledge, that filter false beliefs, and that foster responsible engagement with evidence and facts.
    2. Liberal self-government principle: the public sphere should respect the liberty of all participants, and should enable them to participate as equals who can together constitute a 'public' that governs itself.
    3. Privacy principle: the public sphere should secure an appropriate space for privacy.

    A first overarching aim is to improve philosophical understanding of these principles, and of how they can work together to shape a well-governed new public sphere. It might seem that a right to free expression, grounded on the liberal self-government principle, must be protected even for contributions (e.g. 'climate scepticism', 'fake news') that violate the epistemic value principle because they are blatantly false or fail to engage with available evidence. Similarly, it might seem that the epistemic value principle justifies silencing contributions that have not been channelled through expertise (thereby marginalising many more views than hate speech or 'extremism'), or justifies publicising important privacy-violating truths (e.g. about politicians' families). We examine the new forms of these familiar conflicts. Our second overarching aim is to operationalise our philosophical understanding by developing recommendations for policy-makers, civil society, and citizens in the new media age that would, if followed, deliver a legitimating media policy framework. We will explore, for example, the benefits and costs of regulatory norms for YouTube, Twitter, and Facebook; the case for public service online media platforms alongside old-style public service broadcasting; and the ideals that can define new professional or citizenship roles.

    Given the lack of fit between the emerging new public sphere and old media policies and concepts, it is pressing to develop a better understanding of what a well-constituted new public sphere should look like.

  • Funder: UK Research and Innovation Project Code: EP/R033633/1
    Funder Contribution: 992,641 GBP

    As interaction on online Web-based platforms becomes an essential part of people's everyday lives and data-driven AI algorithms start to exert a massive influence on society, we are experiencing significant tensions in user perspectives regarding how these algorithms are used on the Web. These tensions result in a breakdown of trust: users do not know when to trust the outcomes of algorithmic processes and, consequently, the platforms that use them. As trust is a key component of the Digital Economy, where algorithmic decisions affect citizens' everyday lives, this is a significant issue that requires addressing. ReEnTrust explores new technological opportunities for platforms to regain user trust and aims to identify how this may be achieved in ways that are user-driven and responsible. Focusing on AI algorithms and large-scale platforms used by the general public, our research questions include:

    - What are user expectations and requirements regarding the rebuilding of trust in algorithmic systems, once that trust has been lost?
    - Is it possible to create technological solutions that rebuild trust by embedding values in recommendation, prediction, and information filtering algorithms and allowing for a productive debate on algorithm design between all stakeholders?
    - To what extent can user trust be regained through technological solutions, and what further trust-rebuilding mechanisms might be necessary and appropriate, including policy, regulation, and education?

    The project will develop an experimental online tool that allows users to evaluate and critique algorithms used by online platforms, and to engage in dialogue and collective reflection with all relevant stakeholders in order to jointly recover from algorithmic behaviour that has caused loss of trust. For this purpose, we will develop novel, advanced AI-driven mediation support techniques that allow all parties to explain their views and suggest possible compromise solutions.

    Extensive engagement with users, stakeholders, and platform service providers in the process of developing this online tool will result in an improved understanding of what makes AI algorithms trustworthy. We will also develop policy recommendations, requirements for technological solutions, and assessment criteria for the inclusion of trust relationships in the development of algorithmically mediated systems, together with a methodology for deriving a "trust index" for online platforms that allows users to assess the trustworthiness of platforms easily. The project is led by the University of Oxford in collaboration with the Universities of Edinburgh and Nottingham. Edinburgh develops novel computational techniques to evaluate and critique the values embedded in algorithms, and a prototypical AI-supported platform that enables users to exchange opinions regarding algorithm failures and to jointly agree on how to "fix" the algorithms in question to rebuild trust. The Oxford and Nottingham teams develop methodologies that support the user-centred and responsible development of these tools. This involves studying the processes of trust breakdown and rebuilding in online platforms, and developing a Responsible Research and Innovation approach to understanding trustworthiness and trust rebuilding in practice. A carefully selected set of industrial and other non-academic partners ensures that ReEnTrust's work is grounded in real-world examples and experiences, and that it embeds balanced, fair representation of all stakeholder groups. ReEnTrust will advance the state of the art in trust-rebuilding technologies for algorithm-driven online platforms by developing the first AI-supported mediation and conflict resolution techniques, together with a comprehensive user-centred design and Responsible Research and Innovation framework that will promote a shared-responsibility approach to the use of algorithms in society, thereby contributing to a flourishing Digital Economy.

  • Funder: UK Research and Innovation Project Code: ES/V001035/1
    Funder Contribution: 15,033,200 GBP

    IMPACT stands for 'Improving Adult Care Together'. It is a new £15 million UK centre for implementing evidence in adult social care, co-funded by the ESRC and the Health Foundation. It is led by Professor Jon Glasby at the University of Birmingham, with a Leadership Team of 12 other academics, people drawing on care and support, and policy and practice partners - along with a broader consortium of key stakeholders from across the sector and across the four nations of the UK. IMPACT is an 'implementation centre', not a research centre, drawing on evidence gained from different types of research, the lived experience of people drawing on care and support and their carers, and the practice knowledge of social care staff. It will work across the UK to make sure that it is embedded in, and sensitive to, the very different policy contexts in each of the four nations, as well as being able to share learning across the UK as a whole. As it gets up and running, IMPACT will seek to:

    - Provide practical support to implement evidence in the realities of everyday life and front-line services
    - Overcome the practical and cultural barriers to using evidence in such a pressured, diverse and fragmented sector
    - Bring key stakeholders together to share learning and co-design its work in inclusive and diverse 'IMPACT Assemblies' (based in all four nations of the UK to reflect different policy and practice contexts)
    - Work over three phases of development ('co-design', 'establishment' and 'delivery') to build a centre that creates sustainable change and becomes a more permanent feature of the adult social care landscape

