
Google UK
10 Projects, page 1 of 2
Project 2023 - 2026
Partners: Google UK, OCamlPro, Newcastle University
Funder: UK Research and Innovation
Project Code: EP/X037274/1
Funder Contribution: 492,653 GBP

Open-source software development has become an increasingly popular practice. Today's software systems comprise first-party code and third-party dependencies built through a complex supply-chain process involving different individuals, organizations, and tools. An attacker can compromise any step in the process by deliberately incorporating vulnerabilities into the code, to be triggered at a later stage of the software life cycle. The recent high-impact attack on SolarWinds and the Log4j vulnerability are examples of such rapidly increasing attacks. In this project, we will lay the foundations for providing open-source software that is provably secure. Information-flow control is a well-known mechanism for reasoning about confidentiality and integrity. A security property states that there is no illegal information flow, e.g., no secret data is leaked to public channels and no tainted data is ever passed to sensitive sinks. We introduce the concept of a security summary, which states when it is secure to use an artifact (i.e., there is no illegal flow) and what effects using the artifact has on security-related behaviour. Security summaries are a conceptually simple form of assume-guarantee reasoning with two key ingredients: (1) a guard, which lists the conditions under which using the software is secure, and (2) an effect, which expresses the (security) consequences of using it. While the concept is simple, implementing it is not: the smaller problem is that the software we want to reason about may contain thousands of lines of code; the larger problem is that it will rely on libraries comprising thousands of components with millions of lines of code and intricate interplay.
The question is more "where to start?" than "how to proceed?", unless we are prepared to be constrained to meaningless toy problems. We will address this question by exploiting the compositional character of summary-based reasoning. Security summaries of method calls are key to establishing the security summaries of the methods that rely on them. In this way, we can reduce the problem of reasoning about the security of a large application to the smaller problems of reasoning about the security of individual small methods, and compose their results to establish the security of the whole application. It is quite possible to make security assumptions and then trust them. While this makes software reliable only relative to such assumptions, it allows for successively replacing assumptions with certificates (i.e., correct security summaries), and uncertified methods with certified ones. Once such a process is in full swing, certified libraries will become valuable assets for open-source software development, which will bring them into existence purely through the competitive advantage they provide over uncertified ones. The methods we develop will allow for automatically producing correct security summaries and transparently releasing them, so that a code consumer will be able to check and validate the security of code before reusing it, and also to detect any misbehaviour along the supply chain. Security summaries also hold many research challenges. For example, methods may come with a certain degree of nondeterminism, and not all resolutions of this nondeterminism need satisfy the desired security guarantees; we need to find one that does. Similarly, while the pathway from the security summaries of called methods to the overall desired property is clear, the reverse direction (from our overall goals to requirements on the methods called) provides leeway.
We will deliver sharp requirements, which will make it easier to update or replace a called method, because the requirements its replacement has to fulfill are relaxed. Tackling these problems allows us to combine interesting theoretical challenges with practical relevance that will help produce tomorrow's secure systems.
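The guard/effect structure described above can be illustrated with a minimal sketch. This is a toy model under our own assumptions, not the project's actual formalism: we represent security labels as plain Python sets, a summary as a (guard, effect) pair, and compositional checking as threading labels through a chain of calls. The names `SecuritySummary`, `call_is_secure`, and `compose` are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SecuritySummary:
    """A toy guard/effect pair for one method.

    guard:  labels that must NOT reach the method's inputs
            (conditions under which calling it is secure).
    effect: labels the method's output may carry
            (security consequences of calling it).
    """
    guard: frozenset
    effect: frozenset

def call_is_secure(arg_labels: set, callee: SecuritySummary) -> bool:
    """The call is secure when no forbidden label flows into the callee."""
    return not (arg_labels & callee.guard)

def compose(caller_inputs: set, pipeline: list) -> Optional[set]:
    """Assume-guarantee composition: thread labels through a chain of
    calls, checking each guard in turn; return the final effect labels,
    or None if some step would cause an illegal flow."""
    labels = set(caller_inputs)
    for summary in pipeline:
        if not call_is_secure(labels, summary):
            return None                   # illegal flow detected
        labels = set(summary.effect)      # the output carries the effect
    return labels

# sanitize: accepts anything, emits clean data
sanitize = SecuritySummary(guard=frozenset(), effect=frozenset({"clean"}))
# sink: must never receive tainted data
sink = SecuritySummary(guard=frozenset({"tainted"}), effect=frozenset())

assert compose({"tainted"}, [sanitize, sink]) is not None  # secure chain
assert compose({"tainted"}, [sink]) is None                # illegal flow
```

The point of the sketch is the compositional shape: the security of the whole pipeline is established from the summaries of its parts, without re-analysing each method's body.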
For further information contact us at helpdesk@openaire.eu

Project 2024 - 2026
Partners: GlaxoSmithKline (United Kingdom), GSK, The University of Manchester, Verily Life Sciences, Health Data Research UK, University of Manchester, Google UK
Funder: UK Research and Innovation
Project Code: MR/Y003624/1
Funder Contribution: 1,740,160 GBP

Many patients and members of the public are willing, and often keen, to contribute to health research. This is especially true if the research addresses questions of personal importance and leads to clear public benefit. Unfortunately, taking part in research is not always easy, with practical challenges like travelling for clinic visits during a working week. Smartphones and wearables provide a new opportunity for patients and the public to contribute to health research from the comfort of their own home. For patients, this can make participation easier and allows more people to take part. For researchers, it provides an exciting new data source, allowing important questions to be addressed that have previously been unanswerable. Opportunities include measuring things more frequently: for example, logging daily symptoms during the pandemic via the Zoe COVID Symptom Study smartphone app allowed us to understand that loss of smell was an important and specific symptom of COVID-19 infection. Consumer technology also allows researchers to measure things more accurately. This might include using sensors within your smartphone or wrist-worn device to measure how your activity is changing in response to your disease, or following a new treatment. This new type of data that comes direct from the public can be even more useful if combined with other data, such as information provided by clinical teams within health records, or genetic data from blood samples that have been donated for research.
The number of successful health research studies using smartphones and wearables, however, remains low. This is because the approach is new and difficult, and it requires research teams to overcome many barriers simultaneously. For example, they need to design the study so patients will be interested, can take part simply and easily, and remain engaged over time. They need to find the right technology partner to help them understand what can be measured with the device, and how to do that in the best possible way. They need to ensure all the data remains secure as it moves from the device to computer storage ready for analysis. They need to understand how best to analyse and interpret this new continuous stream of data. Our Partnership Grant brings together researchers who have all conducted successful studies using smartphones and wearables. It is our intention to pool our experience and share it with the wider research community. We will do this by running a series of events. These will describe studies that have gone well, as well as those that haven't, to share lessons learned. We will host regular online meetings and annual events that will allow the whole community to meet and learn from one another. We will host 'walk-in' clinics providing advice and support about partnership with patients and the public, and about how best to conduct research safely and securely. We will support researchers to develop strong bids for future research, and will run annual challenges to improve the way in which we can analyse those continuous data streams. We will share all of this learning at the events, and will also store it online (e.g. as documents or short videos) for anyone to access at any time.
We will also run two projects at the cutting edge of Health Research from Home which require data from smartphones and wearables to be linked to other health data: one on understanding patterns of physical activity after knee replacements, and the second on long-term health outcomes of Long COVID. The projects will answer clinically important questions and simultaneously enhance our understanding of how best to conduct such studies. Taken together, we aim to establish new partnerships, build capacity in this important area, and advance into new, technically difficult areas. We will develop a skilled and sustainable community who will, in the future, enable the public to help answer many of the questions that matter to them through the use of their own devices.
Project 2022 - 2025
Partners: University of Edinburgh, ASTRAZENECA UK LIMITED, KCL, Actable AI Ltd, Google UK, AstraZeneca plc
Funder: UK Research and Innovation
Project Code: EP/V020579/2
Funder Contribution: 887,437 GBP

Natural language understanding (NLU) aims to allow computers to understand text automatically. NLU may seem easy to humans, but it is extremely difficult for computers because of the variety, ambiguity, subtlety, and expressiveness of human languages. Recent efforts in NLU have largely been exemplified by tasks such as natural language inference, reading comprehension, and question answering. A common practice is to pre-train a language model such as BERT on large corpora to learn word representations and to fine-tune it on task-specific data. Although BERT and its successors have achieved state-of-the-art performance in many NLP tasks, it has been found that pre-trained language models mostly reason only about the surface form of entity names and fail to capture rich factual knowledge. Moreover, NLU models built on such pre-trained language models are susceptible to adversarial attacks: even a small perturbation of an input (e.g., paraphrasing questions and/or answers in QA tasks) can result in a dramatic decrease in a model's performance, showing that such models largely rely on shallow cues. In human reading, successful reading comprehension depends on the construction of an event structure that represents what is happening in the text, often referred to as the situation model in cognitive psychology. The situation model also involves the integration of prior knowledge with information presented in the text for reasoning and inference.
Fine-tuning pre-trained language models for reading comprehension does not help build such effective cognitive models of text, and comprehension suffers as a result. In this fellowship, I aim to develop a knowledge-aware and event-centric framework for natural language understanding, in which event representations are learned from text with the incorporation of prior background and common-sense knowledge; event graphs are built on-the-fly as reading progresses; and the comprehension model self-evolves to understand new information. I will primarily focus on reading comprehension, and my goal is to enable computers to solve a variety of cognitive tasks that mimic human-like cognitive capabilities, bringing us a step closer to human-like intelligence.
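The idea of building an event graph on-the-fly as reading progresses can be illustrated with a toy sketch. This is our own simplified illustration, not the fellowship's actual method: we assume events have already been extracted as (subject, predicate, object) triples (a real system would use a semantic parser or event extractor) and merge them incrementally into a graph keyed by entity.

```python
from collections import defaultdict

def update_event_graph(graph, triple):
    """Merge one (subject, predicate, object) event into the graph.
    Nodes are entities; each entity maps to the events it participates in."""
    subj, pred, obj = triple
    graph[subj].append((pred, obj))
    return graph

# Assumed pre-extracted triples, one per sentence read so far.
triples = [
    ("Mary", "entered", "kitchen"),
    ("Mary", "picked_up", "knife"),
    ("knife", "is_on", "table"),
]

graph = defaultdict(list)
for t in triples:          # the graph grows as "reading" progresses
    update_event_graph(graph, t)

assert graph["Mary"] == [("entered", "kitchen"), ("picked_up", "knife")]
assert graph["knife"] == [("is_on", "table")]
```

The incremental structure is the point: after each sentence, the graph is a queryable situation model of everything read so far, which is the property the fellowship's richer event representations aim to preserve at scale.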
Project 2021 - 2022
Partners: AstraZeneca plc, University of Edinburgh, ASTRAZENECA UK LIMITED, Google UK, Actable AI Ltd, University of Warwick
Funder: UK Research and Innovation
Project Code: EP/V020579/1
Funder Contribution: 1,269,620 GBP

Natural language understanding (NLU) aims to allow computers to understand text automatically. NLU may seem easy to humans, but it is extremely difficult for computers because of the variety, ambiguity, subtlety, and expressiveness of human languages. Recent efforts in NLU have largely been exemplified by tasks such as natural language inference, reading comprehension, and question answering. A common practice is to pre-train a language model such as BERT on large corpora to learn word representations and to fine-tune it on task-specific data. Although BERT and its successors have achieved state-of-the-art performance in many NLP tasks, it has been found that pre-trained language models mostly reason only about the surface form of entity names and fail to capture rich factual knowledge. Moreover, NLU models built on such pre-trained language models are susceptible to adversarial attacks: even a small perturbation of an input (e.g., paraphrasing questions and/or answers in QA tasks) can result in a dramatic decrease in a model's performance, showing that such models largely rely on shallow cues. In human reading, successful reading comprehension depends on the construction of an event structure that represents what is happening in the text, often referred to as the situation model in cognitive psychology. The situation model also involves the integration of prior knowledge with information presented in the text for reasoning and inference.
Fine-tuning pre-trained language models for reading comprehension does not help build such effective cognitive models of text, and comprehension suffers as a result. In this fellowship, I aim to develop a knowledge-aware and event-centric framework for natural language understanding, in which event representations are learned from text with the incorporation of prior background and common-sense knowledge; event graphs are built on-the-fly as reading progresses; and the comprehension model self-evolves to understand new information. I will primarily focus on reading comprehension, and my goal is to enable computers to solve a variety of cognitive tasks that mimic human-like cognitive capabilities, bringing us a step closer to human-like intelligence.
Project 2013 - 2018
Partners: Oracle (United States), Google UK, Amazon Web Services, Inc., University of Glasgow, ARM (United Kingdom), Oracle for Research, Advanced Risc Machines (Arm), Amazon (United States)
Funder: UK Research and Innovation
Project Code: EP/L000725/1
Funder Contribution: 1,166,420 GBP

The ecosystem of compute devices is highly connected, and likely to become even more so as the internet-of-things concept is realized. There is a single underlying global protocol for communication which enables all connected devices to interact, i.e., the Internet Protocol (IP). In this project, we will create a corresponding single underlying global protocol for computation. This will enable wireless sensors, smartphones, laptops, servers, and cloud data centres to co-operate on what is conceptually a single task, i.e., an AnyScale app. A user might run an AnyScale app on her smartphone; then, when the battery is running low or wireless connectivity becomes available, the app may shift its computation to a cloud server automatically. This kind of runtime decision-making is made possible by the AnyScale framework, which uses a cost/benefit model and machine learning techniques to drive its behaviour. When the app is running on the phone, it cannot do very complex calculations or use too much memory; on a powerful server, however, the computations can be much larger and more complicated. The AnyScale app will behave in an appropriate way based on where it is running. In this project, we will create the tools, techniques, and technology to enable software developers to create and deploy AnyScale apps.
Our first case study will be to design a movement-controller app that allows a biped robot with realistic humanoid limbs to 'walk' over various kinds of terrain. This is a complex computational task, generally beyond the power of the embedded chips inside robotic limbs. Our AnyScale controller will offload computation to computers on board the robot, or wirelessly to nearby servers or cloud-based systems. This is an ideal scenario for robotic exploration, e.g. of nuclear disaster sites.
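The cost/benefit decision described above can be sketched in a few lines. This is a hypothetical illustration, not the AnyScale framework's actual model: the function name `should_offload` and all numbers and weights are our own assumptions. The idea is simply to compare the estimated cost of computing locally against shipping the input over the network and computing remotely.

```python
def should_offload(task_flops, input_bytes,
                   local_flops_per_s, remote_flops_per_s,
                   network_bytes_per_s, battery_weight=0.0):
    """Return True when offloading is estimated to be cheaper.

    battery_weight > 0 penalises local compute (e.g. when the
    device is on battery power), biasing the decision toward
    offloading; 0.0 means time is the only cost considered.
    """
    local_time = task_flops / local_flops_per_s
    remote_time = (input_bytes / network_bytes_per_s     # ship the input
                   + task_flops / remote_flops_per_s)    # compute remotely
    local_cost = local_time * (1.0 + battery_weight)
    return remote_time < local_cost

# Heavy task, small input, fast network: offloading wins.
assert should_offload(1e12, 1e6, 1e9, 1e11, 1e7) is True
# Tiny task, large input: shipping the data costs more than computing locally.
assert should_offload(1e6, 1e8, 1e9, 1e11, 1e7) is False
```

A real runtime would replace these static estimates with measured profiles and, as the abstract notes, machine-learned predictions; the comparison structure, however, stays the same.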