Contact Information

If you are interested in participating, please register here: Click Here

Competition Rules

The competition on Visual Question Answering on Business Documents is open to participants from both industry and academia. It welcomes both students and professionals who want to contribute to the field of visual question answering on business documents. Below we describe the tentative rules for the competition.

  • Each participating team may include at most 10 people from one or more affiliations. For the sake of fairness to smaller research groups, we will not allow larger teams to participate as a single team.
  • Any team or individual can register an ID using an email address to participate in the competition. Creating multiple IDs for the same team or individual is strictly prohibited.
  • There is no restriction on the number of teams from an institute/organization. However, no individual may appear on more than one team from the same institute/organization.
  • A person may participate on only one team; this includes mentors, with no exceptions.
  • Any registered team downloads the training and validation sets to develop models for any specific script or for all scripts, evaluates on the test set, and uploads the obtained results (in the specified format) to the challenge website.
  • The organizing team will verify the format of the results. For numeric questions, we compute both the ANLS and the percentage of absolute distance from the ground-truth answer; a weighted (0.5, 0.5) combination of these two scores gives the score for each submitted answer. For text questions, evaluation is based on the ANLS score alone. Finally, we compute a category-wise weighted average of the per-answer scores to build the leaderboard in decreasing order of the aggregated scores (an unofficial scoring sketch follows this list). The category-wise weights are as follows:
    • Category 1 : 0.25
    • Category 2 : 0.4
    • Category 3 : 0.5
    • Category 4 : 0.75
    • Category 5 : 1.0
  • Each team is free to use any open-source OCR tool for transcription if required (a minimal OCR example follows this list).
  • As a part of the submission, participants are required to submit a report which should include a high-level description of the pipeline developed along with the details of any open-source framework used in their code.
  • Team members can update their results until the competition end date. The leaderboard displays the scores of the most recently updated results.
  • A team may update its results at most 5 times per day.
  • To verify the uploaded results, the organizers request each team to upload the final/updated trained model and the inference code, along with a ReadMe file, on the competition website. If there is a mismatch between the results uploaded by the team and the results obtained from the submitted inference code and trained model, the team will be disqualified from the competition, and the leaderboard will not display that team's results.
  • Participants will be allowed to use their own datasets for training. However, they must notify the organizers about this.
  • Participants must provide a description of the methods used to produce the submitted results. In the final competition paper, we will summarize these descriptions when describing the submitted systems. If external datasets were used, participants must also provide a full description of them. We reserve the right to disqualify submissions that do not provide a sufficiently detailed description of their system.
  • The winning and runner-up teams will receive certificates listing the names of their members in the exact format and order in which they were registered.
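
For concreteness, below is a minimal, unofficial sketch in Python of the scoring described above. It assumes the common ANLS similarity cutoff of 0.5, reads "percentage of absolute distance" as 1 minus the relative absolute error clamped to [0, 1], and normalizes the category-weighted average by the sum of the weights; none of these details are fixed by the rules above, and the organizers' evaluation script remains authoritative. All function names are illustrative.

    def levenshtein(a, b):
        """Classic dynamic-programming edit distance."""
        if len(a) < len(b):
            a, b = b, a
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def anls(pred, gt, tau=0.5):
        """Normalized Levenshtein similarity with the usual ANLS cutoff tau
        (the 0.5 default is the common convention, not stated in the rules)."""
        p, g = pred.strip().lower(), gt.strip().lower()
        if not p and not g:
            return 1.0
        nl = levenshtein(p, g) / max(len(p), len(g))
        return 1.0 - nl if nl < tau else 0.0

    def numeric_distance_score(pred, gt):
        """Assumed reading of 'percentage of absolute distance': 1 minus the
        relative absolute error, clamped to [0, 1]."""
        if gt == 0:
            return 1.0 if pred == 0 else 0.0
        return max(0.0, 1.0 - abs(pred - gt) / abs(gt))

    def answer_score(pred, gt, is_numeric):
        """Per-answer score: 0.5 * ANLS + 0.5 * numeric-distance score for
        numeric questions, plain ANLS for text questions."""
        s = anls(pred, gt)
        if is_numeric:
            try:
                s = 0.5 * s + 0.5 * numeric_distance_score(float(pred), float(gt))
            except ValueError:
                s = 0.5 * s  # unparseable numeric prediction earns no distance credit
        return s

    CATEGORY_WEIGHTS = {1: 0.25, 2: 0.4, 3: 0.5, 4: 0.75, 5: 1.0}

    def leaderboard_score(per_category_means):
        """Assumed weight-normalized average of per-category mean answer scores,
        e.g. leaderboard_score({1: 0.9, 5: 0.6})."""
        total = sum(CATEGORY_WEIGHTS[c] for c in per_category_means)
        return sum(CATEGORY_WEIGHTS[c] * m
                   for c, m in per_category_means.items()) / total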
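
Since any open-source OCR tool is permitted, a minimal transcription call might look like the following sketch. It assumes pytesseract (an open-source wrapper around the Tesseract engine) and a hypothetical local image file; any comparable open-source tool works equally well.

    from PIL import Image
    import pytesseract  # pip install pytesseract; the Tesseract binary must also be installed

    image = Image.open("sample_invoice.png")   # hypothetical input document image
    text = pytesseract.image_to_string(image)  # plain-text OCR transcription
    print(text)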