CHORUS+ Network of Audio-Visual Media Search

AVmediasearch.eu is the hub where you will find all useful information on events, technologies, resources… linked to audio-visual search.


Recommendations about benchmarking campaigns as a tool to foster multimedia search technology transfer at the European level

I - Main conclusions and lessons learned
II - Recommendations
III - Annex 1 – Results of CHORUS+ survey

Questionnaire

PART 1 of the questionnaire: Using benchmarking for technology transfer

Q1.1 Which aspects are likely to contribute to the commercial success of a technical component?

 

Overall results: (chart)

Detailed results, by respondent group:

- Top-1: Technical performance for all groups (Big comp, Small comp, Develop, Academic, Manager, Student)
- Top-2: Nb & quality of functions (Big comp); Ease of integration (Small comp, Develop); Ease of use (Academic, Manager, Student)
- Worst: Commercial terms (Big comp, Manager); Scientific excellence (Small comp, Develop, Academic, Student)

The results show that, according to all groups of respondents, technical performance is the most important aspect contributing to the commercial success of a technical component. Ease of integration is an important aspect for small companies and developers in general.

 

Interestingly, commercial terms appear to be the least contributive aspect for big companies, whereas, reciprocally, scientific excellence is considered the least contributive aspect by academics.

 

Scientific excellence is, overall, the least contributive factor to commercial success.

Q1.2 How do you identify new technical components that you would like to experiment with and/or benchmark?

 

Overall results: (chart)

Detailed results, by respondent group:

- Top-1: Scientific articles for all groups (Big comp, Small comp, Develop, Academic, Manager, Student)
- Top-2: Benchmarking, Competitors (Big comp); == (Small comp, Develop); Benchmarking (Academic, Manager); Recommendations, conferences (Student)
- Worst: Press & blogs (Big comp, Academic); Competitors (Small comp, Develop, Manager, Student)

The results show that according to all groups of respondents, scientific articles are the best way to identify new technologies, followed by benchmarking campaigns.

 

Interestingly, watching competitors is a good source of information according to big companies, whereas small companies and academics ranked it as the worst criterion.

 

Q1.3 What criteria do you use for selecting technical components (for experimentation, proof of concept or integration in products)?

 

Overall results: (chart)

Detailed results, by respondent group:

- Top-1: Adequacy to user needs (Big comp, Manager); Technical skills (Small comp, Develop); Scientific impact (Academic, Student)
- Top-2: Technical skills, Benchmarking results (Big comp); Benchmarking results (Small comp, Academic, Manager, Student); Adequacy to user needs (Develop)
- Worst: Novelty (Big comp); Security (Small comp, Academic, Manager, Student); Purchase price (Develop)

Overall results show that benchmarking campaigns are on average the best criterion for selecting new technical components for integration or deeper testing (whereas in the previous question scientific articles were judged the best way to identify/discover new components). It is important to notice, however, that benchmarking campaigns are ranked as the second-best criterion by almost all groups in the detailed results table. The top-1 criterion for academics is actually scientific impact, whereas the top-1 criterion for companies is technical skills (e.g. scalability, response times, portability). In between, benchmarking appears as the best compromise between research (technology suppliers) and exploitation (technology integrators).

 

 

PART 1 Synthesis

- Scientific literature is the best way to prospect for and discover new technologies

- Technical performance is the best key to commercial success, whereas scientific excellence is judged the worst

- Academics & Companies differ on how they select technologies in practice (for integration or testing): scientific impact vs. technical skills

- But Academics & Companies agree that benchmarking is a good way to select technologies in practice, so benchmarking appears as the best compromise between research and exploitation. This central position makes it a powerful tool for boosting technology transfer.

 

 

 

PART 2 of the questionnaire: Public evaluation campaigns

 

 

Q2.1 Which evaluation campaign is the most suitable for your business or research activity?


 

For both academics and companies, TRECVID is by far the most suitable evaluation campaign for their business or research activity, followed by ImageCLEF, MIREX and MediaEval. Notice that these results might be biased by the proportion of respondents coming from the TRECVID community. Still, according to other statistics on the number of participants in these different campaigns (provided in D3.3), it is highly plausible that TRECVID is the most popular one. In 2011, for instance, the number of participants was 73 at TRECVID, 43 at ImageCLEF, 40 at MIREX, 39 at MediaEval, 25 at PASCAL VOC and 15 at SHREC.
Q2.2 In your opinion, the challenges measured in public evaluation campaigns are

 

Q2.3 In your opinion, the evaluation criteria used in public evaluation campaigns are

 

 

Both academics and companies consider the challenges measured in public evaluation campaigns, as well as the evaluation criteria used, to be reasonably relevant, and about 20% of them consider them very relevant.

 

 

 

Q2.4 In the future do you plan to

 

 

An important conclusion from this graphic is that almost 60% of the companies that responded to the questionnaire plan to use technologies selected as the best ones in benchmarking campaigns.

 

Respondents' future intentions regarding participation in benchmarking campaigns show a stable interest compared with actual participation (45% of responding companies and 70% of responding academics have participated in a campaign in the past).

 

PART 2 Synthesis

- There is an agreement on the relevance of existing benchmarks

- 60% of companies plan to use technologies selected by benchmarks

- The attractiveness of participating in and organizing public benchmarks remains

 

 

 

PART 3 of the questionnaire: Scientific evaluation criteria

 

 

Q3.1 What are the best criteria that you think should be taken into account when benchmarking multimedia IR components?

 

Overall results: (chart)

Detailed results, by respondent group:

- Top-1: Effectiveness for all groups (Big comp, Small comp, Develop, Academic, Manager, Student)
- Top-2: Scalability (Big comp, Develop, Academic, Manager); Efficiency (Small comp, Student)
- Worst: GUI/ergonomy (Big comp, Academic, Manager, Student); User satisfaction (Small comp); Diversity, exploration (Develop)
- User satisfaction criterion alone, proportion ranking it in the top-2: 0.56 (Big comp); 0.20 (Small comp); 0.43 (Develop); 0.36 (Academic); 0.33 (Manager); 0.37 (Student)

The results show that, according to all groups of respondents, effectiveness is the best criterion for evaluating multimedia IR components, followed by scalability and efficiency. Ergonomy of the graphical user interface is not considered an important criterion in such evaluations. Looking at the user satisfaction criterion alone, we see that a majority of big companies would like to see some user trials in benchmarking campaigns.

 

 

Q3.2 What criteria do you use to judge that a scientific article is an important contribution?

 

Overall results: (chart)

Detailed results, by respondent group:

- Top-1: Scientific excellence, citations (Big comp, Manager); Experiments (Small comp, Develop, Academic); Claims (Student)
- Top-2: Third-party experiments (Big comp); Claims (Small comp, Develop, Academic); Experiments (Manager, Student)
- Worst: Claims (Big comp); Theoretical statements (Small comp, Academic, Manager); Discussions in conferences (Develop, Student)

Scientific excellence and bibliometrics (number of citations, H-index, etc.) are ranked first by managers and big companies. On the other hand, experimental results are ranked first by small companies, developers and academics. Finally, the claims of a paper's authors are ranked first by students, whereas they are among the worst criteria for big companies and managers.

Overall, we can remark that:

- Confidence in claims decreases with the financial impact of the respondents' decisions

- Confidence in the research community increases with the financial impact of the respondents' decisions

- The relevance of experimental results is quite stable across the different groups and is, on average, the best criterion


Q3.3 What are the greatest difficulties in the scientific evaluation of multimedia retrieval?

 

Overall results: (chart)

Detailed results, by respondent group:

- Top-1: Data for all groups (Big comp, Small comp, Develop, Academic, Manager, Student)
- Top-2: Human resources (Big comp, Develop, Academic, Manager, Student); Evaluation protocol (Small comp)
- Worst: Hardware resources for all groups

 

The results clearly show that, according to all groups of respondents, data availability is the most critical issue in evaluating multimedia retrieval technologies. Human resources appear as the second main limitation, whereas hardware resources do not appear to be a problem. This last point must be qualified by the fact that the scale of currently available datasets is relatively small compared to real-world data: if the main limitation (data availability) were solved, hardware limitations would probably become more critical in order to process very large amounts of data.

 

PART 3 Synthesis

 

- Effectiveness is considered the top-1 evaluation criterion, which validates the approach of current benchmarking campaigns. It is followed by scalability and efficiency concerns. Only big companies are convinced by human-centered evaluation as a complementary criterion to be used in benchmarking campaigns.

- Criteria used to evaluate scientific publications are diverse and evolve with the financial impact of the underlying decisions to be taken. Experimental results are the most consensual criterion.

- Data availability is the most critical issue in evaluating multimedia retrieval technologies

 
