
INRIA CENTRE GRENOBLE RHÔNE-ALPES

Country: France


13 Projects, page 1 of 3
  • Funder: French National Research Agency (ANR) Project Code: ANR-10-JCJC-0206
    Funder Contribution: 343,000 EUR
  • Funder: French National Research Agency (ANR) Project Code: ANR-10-JCJC-0207
    Funder Contribution: 115,544 EUR

    The overall goal of the project is to provide representations and algorithms for real-time navigation, on consumer hardware, in a realistic and plausible virtual Earth model. We target the rendering of terrain, vegetation, water surfaces and clouds (we exclude human artefacts), all highly detailed at every scale from ground level to space, with physically based motion and illumination at all scales, and without visible transitions between scales. We do not target the best possible physical accuracy, as in radiative transfer models or computational fluid dynamics methods (used, for instance, in remote sensing, climate modelling and meteorology). Instead, we target visual quality and physical plausibility, i.e., shape, illumination and motion models that look realistic and are efficient enough for real-time applications. These goals are ambitious, as several scientific obstacles must be overcome to reach them. Solving these hard problems, even for some specific cases only, would constitute important scientific breakthroughs:

    - Scalable shape models are hard to design, especially when they must scale over several orders of magnitude, and providing seamless transitions between scales greatly complicates the problem. These goals have been reached in only a few cases.
    - Scalable illumination models are an even harder problem: averaging the shapes inside a pixel is much easier than averaging the illumination contribution of all these shapes, which can have different orientations, visibility (due to self-occlusion), incoming light (due to self-shadowing and inter-reflections) and reflection properties.
    - Scalable motion models for fluids (water and clouds) are also a hard problem, especially when seamless transitions are needed across several orders of magnitude. Although multi-resolution techniques have been proposed for grid-based and particle-based methods, providing real-time fluid motion on large domains remains difficult.

    Our results will be published in scientific conferences and journals. We also plan to integrate them in the Proland [Pro09] platform, our virtual Earth browser prototype, which already integrates our preliminary results on terrain, atmosphere, ocean and rivers. We demonstrated Proland to the public at the "Fête de la science" in 2009 and plan to do so again in the future. We also sold a license of Proland to a planetarium company, and we use it in an industrial project for flight simulations. Its source code is not public, and we do not plan to release it as open-source software. This project involves two academic researchers from the same laboratory who work in neighbouring teams, plus students hired on the project (one PhD student plus short-term students). The project tasks and sub-tasks are well separated and independent, so the project management is straightforward.
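
    To make the "seamless transitions between scales" idea concrete, here is a minimal sketch of screen-space-error-driven terrain LOD selection with geomorphing, a standard way to hide popping between detail levels. Everything in it (the Node layout, the 1% geometric-error assumption, the morph convention) is an illustrative assumption, not Proland's actual API.

```python
# Illustrative sketch: quadtree terrain LOD with geomorphing
# (hypothetical names; not Proland's API).
import math
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    center: tuple                     # (x, y, z) patch centre, world units
    size: float                       # edge length of the square patch
    children: List["Node"] = field(default_factory=list)

def screen_error(node: Node, eye: tuple, fov_px: float) -> float:
    """Approximate projected size, in pixels, of the patch's geometric error."""
    distance = math.dist(node.center, eye)
    geometric_error = 0.01 * node.size        # assumed: 1% of patch size
    return fov_px * geometric_error / max(distance, 1e-6)

def select_lod(node: Node, eye: tuple, fov_px: float, tol_px: float, out: list):
    """Refine the quadtree until the projected error drops below tol_px."""
    err = screen_error(node, eye, fov_px)
    if err <= tol_px or not node.children:
        # Morph factor in [0, 1]: 0 draws the patch with its parent's
        # coarser geometry, 1 with full detail. Blending it toward 1 as
        # the error approaches the split threshold reduces popping.
        morph = min(1.0, err / tol_px)
        out.append((node, morph))
    else:
        for child in node.children:
            select_lod(child, eye, fov_px, tol_px, out)
```

    The morph factor is what buys the seamlessness: instead of switching levels abruptly, a renderer interpolates vertices toward the coarser parent geometry as the screen-space error shrinks.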

  • Funder: French National Research Agency (ANR) Project Code: ANR-14-CE24-0030
    Funder Contribution: 286,444 EUR

    The technological advancements made over the past decade now allow the acquisition of vast amounts of visual information through image-capturing devices such as digital cameras and camcorders. A central subject of interest in video is humans: their motions, actions and expressions, and the way they collaborate and communicate. Analyzing video data of humans collected at complex real-world events (extracting high-fidelity content and turning raw data into knowledge), and detecting, reconstructing or understanding human motion, are problems of key importance for the advancement of a variety of technological fields, including video coding, entertainment, culture, animation and virtual reality, intelligent human-computer interfaces, and protection and security. The visual analysis of humans in real-world environments, indoors and outdoors, however, faces major scientific and computational challenges. The proportions of the human body vary widely across individuals, any single human body has many degrees of freedom due to articulations, and individual limbs deform due to moving muscles and clothing. Finally, real-world events involve multiple interacting humans occluded by each other or by other objects, and the scene conditions may also vary due to camera motion or lighting changes. All these factors make appropriate models of human structure, motion and action difficult to construct and difficult to estimate from images. The goal of ACHMOV is to extract detailed representations of multiple interacting humans in real-world environments in an integrated fashion, through a synergy between detection, figure-ground segmentation and body-part labeling, accurate 3D geometric methods for kinematic and shape modeling, and large-scale statistical learning techniques. By integrating the complementary expertise of two teams (one French, MORPHEO, and one Romanian, CLVP), both with solid prior track records in the field, there are considerable opportunities to move towards processing complex real-world scenes of multiple interacting people and to extract rich semantic representations with high fidelity. This would enable interpretation, recognition and synthesis at unprecedented levels of accuracy and in considerably more realistic setups than currently considered.
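
    The kinematic-modeling ingredient can be illustrated with a few lines of forward kinematics for an articulated chain such as an arm: given joint angles and segment lengths, chain the rotations to recover 3D joint positions. The planar single-axis joints and the segment lengths below are invented for the example; they are not ACHMOV's body model.

```python
# Illustrative sketch: forward kinematics of a simple articulated chain.
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    """3x3 rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(bone_lengths, joint_angles):
    """Accumulate rotations from the root outward and return the 3D
    position of every joint along the chain."""
    position = np.zeros(3)
    R = np.eye(3)
    joints = [position.copy()]
    for length, theta in zip(bone_lengths, joint_angles):
        R = R @ rot_z(theta)                       # compose joint rotation
        position = position + R @ np.array([length, 0.0, 0.0])
        joints.append(position.copy())
    return joints

# e.g. shoulder -> elbow -> wrist, with 30 cm and 25 cm segments
print(forward_kinematics([0.30, 0.25], [np.pi / 4, -np.pi / 6]))
```

    Pose estimation inverts this map: given image evidence about the joint positions, recover the angles, which is what makes the articulated degrees of freedom mentioned above so costly.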

  • Funder: French National Research Agency (ANR) Project Code: ANR-14-CE27-0009
    Funder Contribution: 88,817.2 EUR

    The VIMAD project has two main goals: a technological one, to build a robust and reliable perception system, based only on visual and inertial measurements, to enhance the navigation capabilities of fully autonomous micro aerial drones; and a scientific one, to acquire a deep understanding of the problem of fusing visual and inertial measurements (hereafter the visual-inertial structure from motion problem, VISfM). The perception system will be embedded on micro drones to make them able to navigate safely and autonomously in GPS-denied and unknown environments, and even to perform aggressive manoeuvres. By unknown environments we mean environments that are not equipped with motion-capture systems or any other external sensor. Perception is still the main problem for high-performance robotics. Once the perception problem is assumed solved, for example through the use of external motion-capture systems, established control techniques allow for highly performing systems [19,28]. A perception system suitable for a micro aerial vehicle must satisfy severe constraints due to the small size and, consequently, the low allowed payload. This imposes the use of lightweight sensors and low-computational-complexity algorithms. In this context, inertial sensors and monocular cameras, thanks to their complementary characteristics, low weight, low cost and widespread use, represent an interesting sensor suite. On the other hand, current technologies for navigation based only on visual and inertial sensors have the following strong limitations:

    - The localization task is achieved via recursive algorithms that need initialization. This means that they are not fully autonomous and, more importantly, they are not robust against unmodeled events (e.g., a system failure) that require the algorithm to be re-initialized.
    - They are not precise enough to allow a micro aerial vehicle to undertake aggressive manoeuvres and, more generally, to accomplish sophisticated tasks.

    To overcome these limitations, our perception system will be developed by relying on the following three new paradigms:

    - use of the closed-form solution to the visual-inertial structure from motion problem introduced in [23,24];
    - exploitation of the information contained in the dynamics of the drones;
    - use of the observability tool developed in [22].

    The first paradigm will allow the perception system to initialize (or re-initialize) the localization task without external support. In other words, it will make the localization task fully autonomous and robust against unmodeled events such as a kidnapping. Additionally, it can be used to introduce a low-cost data-association method. The second paradigm will enhance the perception capabilities in terms of precision, which is important for accomplishing aggressive manoeuvres. Finally, the third paradigm will allow us both to acquire a deeper understanding of the VISfM problem and, we hope, to design new and more effective sensor arrangements. In our opinion this scientific topic deserves a deep theoretical investigation, since the perception system of most mammals relies precisely on visual and vestibular signals. A deep scientific understanding of this problem could allow the robotics community to introduce new technologies for navigation. Specifically, we will approach this fundamental problem in two main steps. In the first, we will investigate an open problem in the framework of control theory: Unknown Input Observability (UIO), i.e., observability analysis in the case of unknown inputs. In the second, we will use the results obtained for UIO to investigate the observability properties of the VISfM problem in the case of missing inertial inputs, and eventually to design new sensor arrangements.
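
    The kind of observability analysis referred to above can be illustrated on a linear toy system: stack C, CA, ..., CA^(n-1) into the observability matrix and check its rank. The constant-velocity state with position-only measurements below is an illustrative assumption; the VISfM system and the UIO analysis are nonlinear and considerably more involved.

```python
# Illustrative sketch: numerical observability check for a linear system.
import numpy as np

def observability_matrix(A: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Stack C, CA, CA^2, ..., CA^(n-1) for an n-state linear system."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Toy example: state x = [position, velocity], sensor measures position only.
A = np.array([[1.0, 0.1],      # constant-velocity discrete dynamics (dt = 0.1)
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])     # position-only measurement

O = observability_matrix(A, C)
print("rank:", np.linalg.matrix_rank(O), "of", A.shape[0])  # full rank => observable
```

    Here the rank is full, so velocity can be inferred from a position history alone; the project's question is the analogous one when some inputs (the inertial ones) are unknown or missing.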

  • Funder: French National Research Agency (ANR) Project Code: ANR-11-BS02-0006
    Funder Contribution: 379,765 EUR

    A major goal of computer graphics algorithms is to create images of virtual scenes that are as close as possible to what the scenes would look like in reality. This is called photorealistic rendering, and it is commonly used in virtual prototyping, e.g., in the automotive industry or in architecture. Photorealistic rendering implies accurate computation of the effects of light transport and reflection in the scene, which is a very computationally expensive process. The process is expensive partly because illumination behaves in a highly complex manner: in some places it varies very rapidly, while in others it changes smoothly. Through a complete and thorough analysis of the equations of light transport, however, we believe we can extract information about this behaviour and use it for more efficient computations. We will analyze light transport using three different approaches: frequency analysis, dimensionality analysis and first-order analysis. Each will be applied to two different uses of lighting simulation: offline, high-quality simulation and interactive simulation. The key novelty of our approach stems from the extraction of compact and efficient quantities from the above analyses, which are used to design innovative algorithms with significant gains in both time and storage. In what follows, we present examples of both the analysis and its application, e.g., the use of the covariance matrix to encode the frequency properties of light transport and its potential to dramatically improve the speed of Monte Carlo path tracing.
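
    As a toy illustration of why knowing where illumination varies rapidly pays off, the sketch below allocates Monte Carlo samples adaptively using a cheap empirical pilot estimate of per-region variance. This is only a crude stand-in for the covariance-based frequency analysis described above, which predicts local bandwidth analytically rather than empirically; the integrand and all names are invented for the example.

```python
# Illustrative sketch: variance-driven adaptive Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(0)

def radiance(x):
    """Toy 1-D 'pixel footprint' integrand: smooth plus a sharp feature."""
    return 1.0 + 0.5 * np.sin(40.0 * x) * (x > 0.7)

def adaptive_estimate(n_pilot=16, n_total=256):
    # Pilot pass: estimate local variation on two halves of the domain.
    halves = [(0.0, 0.5), (0.5, 1.0)]
    variances = []
    for lo, hi in halves:
        samples = radiance(rng.uniform(lo, hi, n_pilot))
        variances.append(samples.var() + 1e-8)
    # Spend the remaining budget proportionally to the estimated variance,
    # i.e. more samples where the integrand varies quickly.
    weights = np.array(variances) / sum(variances)
    estimate = 0.0
    for (lo, hi), w in zip(halves, weights):
        n = max(1, int(w * (n_total - 2 * n_pilot)))
        samples = radiance(rng.uniform(lo, hi, n))
        estimate += (hi - lo) * samples.mean()   # stratified combination
    return estimate

print(adaptive_estimate())
```

    An analytic bandwidth predictor plays the same role as the pilot pass here, but without spending any samples to discover where the illumination is smooth.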
