Visual Information Processing and Protection Group
CS addresses the problem of collecting a number of measurements smaller than that required by Shannon's sampling theorem, yet sufficient to reconstruct sparse signals with arbitrarily low error. A promising application of CS theory is the remote acquisition of hyperspectral imagery for satellite Earth observation. This project aims at reducing the computational complexity of the reconstruction algorithm, allowing CS to be applied to larger spatial/spectral blocks, and improving its performance. This will result in a better trade-off between the number of detectors and the speed of the light modulator/detector, without sacrificing reconstruction quality. The problem of sensor calibration will also be investigated in the context of CS.
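The kind of reconstruction involved can be illustrated with a minimal sketch (an illustration only, not the project's algorithm): a k-sparse signal is acquired through a random Gaussian measurement matrix, standing in for the optical light modulator, and recovered with Orthogonal Matching Pursuit.

```python
# Minimal compressed-sensing sketch: m << n random measurements of a
# k-sparse signal, recovered with Orthogonal Matching Pursuit (OMP).
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                      # signal length, measurements, sparsity

# k-sparse test signal
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)

# Random Gaussian sensing matrix (stand-in for the light modulator)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                                  # compressive measurements

# Orthogonal Matching Pursuit
residual, idx = y.copy(), []
for _ in range(k):
    idx.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    residual = y - A[:, idx] @ coef

x_hat = np.zeros(n)
x_hat[idx] = coef
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

With m well below n, the sparse signal is recovered essentially exactly; the project seeks to retain this property while reducing the computational cost of reconstruction on large spatial/spectral blocks.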
The project focuses on the development of new techniques for multi-clue forensic analysis that, starting from the indications provided by a pool of forensic tools, each designed to detect the presence of specific artifacts, reach a global conclusion about the authenticity of a given image. The techniques to be developed will have to operate in highly unstructured scenarios (e.g. analysing images downloaded from the Internet) characterized by imprecise and incomplete information.
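As a rough illustration of the kind of fusion involved (a hypothetical sketch, not the techniques the project will develop), the snippet below combines the tampering probabilities reported by a pool of tools as log-likelihood ratios, so that a tool that cannot analyse the image simply contributes nothing to the global decision.

```python
# Toy multi-clue fusion: combine per-tool tampering probabilities,
# tolerating missing outputs (imprecise/incomplete information).
import math

def fuse(tool_scores, prior_tampered=0.5):
    """tool_scores: dict tool_name -> P(tampered) in (0,1), or None if unavailable."""
    llr = math.log(prior_tampered / (1 - prior_tampered))
    for name, p in tool_scores.items():
        if p is None:                       # tool could not analyse the image
            continue
        p = min(max(p, 1e-6), 1 - 1e-6)     # guard against 0/1 saturation
        llr += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-llr))         # fused P(tampered)

scores = {"jpeg_ghosts": 0.8, "cfa_analysis": None, "copy_move": 0.3}
print("fused probability of tampering:", round(fuse(scores), 3))
```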
The MAVEN project will focus on the development of a set of tools for multimedia data management and security. More specifically, MAVEN objectives will be centered on two key concepts, "search" and "verify", integrated in a coherent manner: MAVEN will first search for digital contents containing "objects" of interest (e.g. a face appearing on CCTV, or a specific logo appearing in the news). Once those objects are retrieved, MAVEN will apply advanced forensic analysis tools to verify their integrity and authenticity.
With the rapid proliferation of inexpensive acquisition and storage devices, multimedia objects can be easily created, stored, transmitted, modified and tampered with by anyone. During its lifetime, a digital object may go through several processing stages, including multiple analog-to-digital (A/D) and digital-to-analog (D/A) conversions, coding and decoding, transmission, and editing. The REWIND project aims at synergistically combining principles of signal processing, machine learning and information theory to answer relevant questions about the past history of such objects.
The vision inspiring LivingKnowledge is to consider diversity an asset and to make it traceable, understandable and exploitable, with the goal of improving navigation and search in very large multimodal datasets (e.g., the Web itself). LivingKnowledge will study the effect of diversity and time on opinions and bias, a topic with high potential for social and economic exploitation. We envisage a future where search and navigation tools (e.g., search engines) will automatically classify and organize opinions and bias (about, e.g., global warming or the Olympic Games in China) and will therefore produce more insightful, better organized, easier-to-understand output.
LivingKnowledge employs interdisciplinary competences from, e.g., philosophy of science, cognitive science, library science and semiotics. The proposed solution is based on the foundational notion of context and its ability to localize meaning, and on the notion of facet, drawn from library science, with its ability to organize knowledge as a set of interoperable components (i.e., facets). The project will construct a very large testbed, integrating many years of Web history and value-added knowledge, state-of-the-art search technology and the results of the project. The testbed will be made available for experimentation, dissemination and exploitation.
The overall goal of the LivingKnowledge project is to bring a new quality into search and knowledge management technology, making search results more concise, complete and contextualised. On a provisional basis, we take conciseness as referring to the process of compacting knowledge into digestible elements, completeness as meaning the provision of comprehensive knowledge that reflects the inherent diversity of the data, and contextualisation as indicating everything that allows us to understand and interpret this diversity.
We will further explore requirements and evaluate our progress by applying the technology we develop to the LivingKnowledge testbed. The LivingKnowledge testbed will contain a large amount of timed, diverse and biased knowledge accumulated over many years of Web evolution (provided by the European Archive), and it will be powered by state-of-the-art search technology (provided by Yahoo!).
Distributed Source Coding (DSC) addresses the problem of separately coding two or more correlated information sources while reaching the same performance as a joint coder. Although the possibility of achieving such a surprising result was proved theoretically about 30 years ago, its practical application to real data has been considered only recently. The aim of this project is to apply DSC concepts to two remote sensing scenarios.

The first scenario considers the compression of hyperspectral images on board a satellite platform. The bands of the hyperspectral image are seen as a set of correlated sources that should be coded jointly in order to achieve the performance predicted by Shannon's theory. Because the complexity of the satellite platform must be kept as low as possible, simple compression algorithms are mandatory, making the joint coding of image bands difficult to achieve. By applying the DSC paradigm, the compression efficiency of a joint coder can be approached by a much simpler band-by-band coder. The reduced complexity of the encoder is paid for by an increased complexity of the decoder. This is not a problem, however, since image reconstruction is usually performed at the ground station, where no particular computational constraint exists.

The second scenario regards the compression of an image on board a satellite when a similar image already exists in the archive of the ground station, as is typically the case for a series of images framing the same scene at different time instants. Here the project will investigate the possibility of applying DSC to transmit only the "innovation" of the new image with respect to the images available at the decoder. Significant advantages in terms of efficient exploitation of the downlink channel are expected, increasing the amount of data that can be transmitted from the satellite to the ground stations.
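The mechanism can be illustrated with a toy Slepian-Wolf sketch (illustrative only, not the project's codec): the encoder transmits only the 3-bit syndrome of a 7-bit block under the (7,4) Hamming code, and the decoder, which holds correlated side information differing in at most one bit, recovers the block exactly.

```python
# Toy Slepian-Wolf (DSC) example: send the Hamming(7,4) syndrome of X;
# the decoder corrects its side information Y (differing from X in at
# most one bit) to recover X exactly, at a rate of 3 bits instead of 7.
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the binary
# representation of i+1, so single-bit error patterns have distinct syndromes.
H = np.array([[(i + 1) >> b & 1 for i in range(7)] for b in range(3)])

def syndrome(v):
    return H @ v % 2

x = np.array([1, 0, 1, 1, 0, 0, 1])        # source block at the satellite
y = x.copy(); y[4] ^= 1                    # correlated side info at the ground station

s_x = syndrome(x)                          # only these 3 bits are transmitted
e_syndrome = (s_x + syndrome(y)) % 2       # syndrome of the error pattern x XOR y
if e_syndrome.any():
    # locate the single flipped bit by matching against the columns of H
    pos = next(i for i in range(7) if np.array_equal(H[:, i], e_syndrome))
    y[pos] ^= 1

print("recovered:", y, "matches X:", np.array_equal(x, y))
```

The same principle, with far more powerful channel codes, is what allows a simple band-by-band encoder on board to approach the efficiency of a joint coder running at the ground station.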
ECRYPT is a 4-year Network of Excellence funded within the Information Society Technologies (IST) Programme of the European Commission's Sixth Framework Programme (FP6) under contract number IST-2002-507932. It falls under the action line "Towards a global dependability and security framework". ECRYPT was launched on February 1st, 2004. Its objective is to intensify the collaboration of European researchers in information security, and more particularly in cryptology and digital watermarking.
The goal of SPEED is to advance the marriage between signal processing and cryptographic techniques, at both the theoretical and the practical level. The objective is the initiation and development of a totally new and unexplored interdisciplinary framework and technologies for signal processing in the encrypted domain (s.p.e.d.). As a result, entirely new solutions will emerge to the problem of security in multimedia communication/consumption and digital signal manipulation.
Most of the technological solutions proposed so far to cope with multimedia security simply tried to apply some cryptographic primitives on top of the signal processing modules. These solutions are based on the assumption that the two communicating parties trust each other, so that encryption is used only to protect the data against third parties. In many scenarios, though, this assumption does not hold.
Recently, some pioneering works addressing a few scattered scenarios have pointed out that a possible solution to the above problems could consist in the application of the signal processing modules in the encrypted domain. It is the aim of SPEED to foster the birth of the new s.p.e.d. discipline and to demonstrate its ability to provide solutions to the call for security stemming from some selected application scenarios, including multimedia security and privacy-preserving access to sensitive contents.
SPEED research activity will be carried out both at a theoretical and a practical level, the theoretical part being dedicated to the development of a general framework investigating the fundamental limits and trade-offs of s.p.e.d., and the practical part being devoted to the development of some of the basic s.p.e.d. building blocks and to their application in some selected scenarios. The project will end with the validation of the proposed solutions by means of a demonstrator.
The objective of this activity is to develop a lossy compression algorithm based on Distributed Source Coding (DSC), using DSC to boost the performance of 2D compression schemes ideally to that of 3D schemes, without complicating the encoder. The algorithm shall be tested on real-world hyperspectral images such as those produced by the AVIRIS sensor.
The performance of the compression algorithm developed within this project has to be compared with that of state-of-the-art technology. In particular, efficient 2D and 3D lossy compression algorithms must be identified and used as relevant benchmarks. Since the proposed activity targets improvements in terms of both complexity and compression efficiency, both high-performance and low-complexity techniques have to be considered when selecting the set of benchmarks.
The actual use of digital watermarking in real applications is impeded by the weakness of currently available algorithms against common signal processing manipulations and intentional attacks on the watermark. In this context, no effective solutions exist to cope with very simple manipulations leading to the desynchronization of the watermark embedder and detector. This project aims at filling this gap: first by analysing the desynchronization problem at a very general and theoretical level, exploiting some new information-theoretic tools; then by testing the theoretical analysis on synthetic and real data; and finally by developing a new class of watermarking algorithms that are robust against desynchronization.
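The desynchronization problem is easy to reproduce with a minimal additive spread-spectrum sketch (a textbook illustration, not one of the algorithms to be developed): a plain correlation detector responds strongly to the watermarked signal, but shifting the signal by a single sample is enough to drive the detection statistic towards zero.

```python
# Additive spread-spectrum watermarking with a correlation detector,
# showing how a one-sample shift (desynchronization) breaks detection.
import numpy as np

rng = np.random.default_rng(1)
n, strength = 4096, 0.5

host = rng.standard_normal(n) * 5          # stand-in for image coefficients
w = rng.choice([-1.0, 1.0], n)             # pseudo-random watermark sequence
marked = host + strength * w               # embedding

def detect(signal, w):
    return float(signal @ w) / len(w)      # normalized correlation statistic

print("synchronized:  ", round(detect(marked, w), 3))              # close to the strength
print("shifted by one:", round(detect(np.roll(marked, 1), w), 3))  # close to zero
```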
Processing sensitive information such as biometric or biomedical signals in a non-trusted scenario, while ensuring that the privacy of the involved parties is preserved, requires that new tools and solutions be developed. In this project we investigate the possibility of processing signals in the encrypted domain for privacy-aware treatment of sensitive information. By relying on advanced cryptographic primitives such as homomorphic encryption, multiparty computation and zero-knowledge protocols, we will analyse the possibility of developing secure signal processing primitives, such as linear transforms, scalar products or FIR filters, capable of operating on encrypted data. The developed signal processing primitives will be assembled into a set of basic pattern recognition tools (e.g. neural networks or classifiers) forming the basis for the analysis and interpretation of encrypted signals. At an even higher level, the pattern recognition primitives will be applied to practical scenarios involving the treatment of biometric signals, such as face or iris images, or other kinds of sensitive data, such as biomedical signals. The requirements stemming from the application level, including those raised by current privacy regulations, will be considered so as to cast the activity into a practical setup. The architectural and data-flow constraints will be considered as well, so as to encompass all the levels of the addressed scenario.
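As a rough illustration of an encrypted-domain linear primitive (a toy sketch under simplifying assumptions, not the project's protocols), the snippet below uses a textbook Paillier cryptosystem with tiny demo primes to compute a scalar product between a client's encrypted samples and a server's plaintext filter coefficients; real deployments would need full-size keys and proper parameter handling.

```python
# Toy Paillier cryptosystem and an encrypted-domain scalar product.
import math
import random

# --- key generation (insecure demo primes; real keys need ~2048-bit moduli) ---
p, q = 1009, 1013
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                        # valid because g = n + 1 is used below

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:              # r must be invertible modulo n
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# --- client side: encrypt the signal samples ---
x = [3, 14, 15, 9]
enc_x = [encrypt(v) for v in x]

# --- server side: scalar product with plaintext coefficients, on ciphertexts ---
h = [2, 0, 1, 5]
enc_dot = 1
for c, w in zip(enc_x, h):
    enc_dot = (enc_dot * pow(c, w, n2)) % n2   # E(x_i)^h_i accumulates E(sum h_i * x_i)

# --- client side: decrypt the result ---
print("encrypted-domain result:", decrypt(enc_dot))
print("plaintext check:        ", sum(a * b for a, b in zip(x, h)))
```

The server never sees the samples in the clear, which is the basic property the project's secure linear transforms and FIR filters aim to provide at scale.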
Centrica is a company founded in 1999. It offers web, imaging and multimedia services and products for public administrations and companies. Moreover, Centrica is increasingly positioning itself as a provider of innovative digital and Internet imaging technologies. In this scenario, Centrica has developed XLmark®, a proprietary watermarking solution embedded in XLimage® to protect the copyright of images. The aim of this project is to develop a new watermarking technique that improves the robustness of the current algorithm. At the end of the project, the new release of XLmark® will be integrated into all of Centrica's image management products.
Other projects funded by Centrica:
Watermarking techniques for the protection and authentication of remote sensing images accessible through public and private data transmission networks.
Analysis of protection techniques for multimedia data and development of image watermarking software to be integrated with the digital terrestrial television system planned within the D3T project.
Recent signal processing results show that it is possible to perform signal acquisition using sampling rates well below those dictated by Shannon's sampling theorem. This new theory, called "compressive sampling", applies to signals that exhibit some correlation, as most natural signals and images do, and paves the way for new satellite imaging systems that can provide significantly improved resolution without increasing the number of detectors. This project addresses the design and proof of concept of compressive sampling for satellite imaging, with the objective of defining suitable sampling strategies and assessing the quality of the reconstructed images.