CHORUS+ Network of Audio-Visual Media Search

Recommendations about benchmarking campaigns as a tool to foster multimedia search technology transfer at the European level

I - Main conclusions and lessons learned
II - Recommendations
III - Annex 1 – Results of CHORUS+ survey

II - Recommendations


Based on the conclusions and lessons learned about multimedia search technology benchmarking, the CHORUS+ consortium believes that a more sustainable and efficient way to fund and synchronize benchmarking campaigns in Europe is required. The following recommendations go in that direction:

Ensuring transparency, sustainability and efficiency of benchmarking campaign funding.

As long as EU benchmarking campaigns rely on opportunistic and unaccountable funds, their efficiency cannot be measured (by either the funders or the organizers of these campaigns). EU funding for the organization of benchmarking campaigns should therefore be more centralized and made conditional on a clear budget and work plan (for instance through specific calls for projects, or through a dedicated EIT service similar to the role NIST plays in the US). The additional cost would be compensated by the reduction of current costs (which are split across several projects) and by an overall efficiency gain.

  

Ensuring that the steering of benchmarking campaigns is balanced.

The challenges measured in benchmarking campaigns have to be defined collectively by the research community, industry and public authorities. It is crucial in particular to preserve diversity of innovation by ensuring that new task proposals come from both the research community and industry. Acceptance mechanisms should also rely on a balanced pool of experts.

 

Ensuring that the costs of participating in benchmarking campaigns are eligible for funding in EU research projects.

This would help cover the additional engineering costs required to participate in benchmarking campaigns and foster the participation of small organizations (SMEs, small research groups, etc.).

  

Encouraging participation in benchmarking campaigns.

Benchmarking campaigns are a tool to boost technological progress and foster exchanges between industry and academia. They should not be considered a way to rate companies or research groups: the technical performance measured in these challenges only partially reflects the scientific excellence of the underlying work or the quality of the tested products. In particular, making funding conditional on benchmarking results should be avoided. The simple fact that an organization participates in a campaign stimulates results and the motivation to move forward. Successful organizations remain free to communicate their results as evidence of scientific excellence or to advertise their products. Allowing anonymous submissions could be a good way to increase the participation of companies, which have been reported to defer participation for fear of bad publicity in case of poor results. Widening the number and reach of participants will mechanically foster technology transfer (for example, companies may take up ideas and algorithms from academic teams ranked above them).

 

Besides these structural recommendations, the CHORUS+ consortium would also like to highlight two key objectives towards improving current benchmarking practices:

 

Moving to larger and real-world data.

When the data used is too small or too narrow, technologies generalize poorly once transferred to real-world content. This gap between the performance measured in benchmarking campaigns and what can be expected at full scale weakens technology transfer: integrating new technologies into large infrastructures without sufficient performance guarantees is simply too risky for many industrial players.

Allowing user-centric and external evaluations.

The system-oriented evaluation metrics used in current benchmarks are essential but not sufficient to cover the vast range of uses of the evaluated technologies. Furthermore, evaluation methodologies are often not scalable because of the large amount of human work required to build appropriate evaluation data. Complementary to current practices, a good evaluation framework should allow other research groups, companies or even end users to evaluate a technology against their own criteria or within their own workflow.
  

These two objectives are in fact contingent on more general concerns in the multimedia research community: data openness, availability of large-scale infrastructures, and technology sustainability. The CHORUS+ recommendations towards achieving these objectives therefore go beyond benchmarking issues, but we believe making them can help converge on solutions:

 

Ensuring data openness in EU projects.

Companies often refuse to share the large datasets used in their scientific publications, sometimes for competitive reasons and sometimes to protect customers' privacy. On the other hand, as big data becomes an important research area, this practice is criticized by many researchers for its secrecy and the attendant risks of bad science, potential fraud, etc. The problem occurs within EU-funded projects as well. Our recommendation is therefore to make EU funding conditional on some guarantees of data openness, at least within the project's consortium, and possibly towards the research community (typically through benchmarking campaigns).

 

Funding large-scale infrastructures.

Besides privacy and copyright issues, hardware resources and data-management problems prevent many research groups from working on real-world, big data. We advocate setting up a shared infrastructure at the European level adapted to research on information retrieval and data mining. Such an infrastructure should be able to host large-scale multimedia data as well as services developed by research projects (such as those that could be evaluated in benchmarking campaigns). This could be done in collaboration with major content providers and owners of big infrastructures in Europe.

 

Ensuring the sustainability of technologies built within EU-funded projects.

When not exploited commercially, many relevant technologies built within EU projects are lost. New projects often re-develop the same piece of work, resulting in a large waste of time and money. Ensuring the sustainability of the technical components developed in EU projects is therefore crucial. Our recommendation is that the developed components should be either commercially exploited or shared (with new EU projects and/or with the research community). A moratorium could be applied to make things easier, notably for industrial partners: results may be locked up for one or two years, but should then be shared if no commercial exploitation has occurred. An open infrastructure such as the one discussed above would make such sharing easier.


THIS WEBSITE HAS BEEN BUILT UPON THE EFFORTS OF ITS PREDECESSOR PROJECT CHORUS
