IIIT-Synthetic-IndicSTR-Assamese

Language

Assamese

Modality

Scene Text

Description

The IIIT-Synthetic-IndicSTR-Assamese dataset consists of 2M synthetically generated word images along with their corresponding annotations. Freely available Unicode fonts are used to render the synthetic word images, using the ImageMagick, Pango, and Cairo tools. To mimic typical document images, the background is always lighter (higher intensity) than the foreground. Each word is rendered as an image using a random font; font size, font styling (bold and italic), foreground and background intensities, kerning, and skew are varied per image to generate a diverse set of samples. A random one-fourth of the images are smoothed using a Gaussian filter with a standard deviation (σ) of 0.5. Finally, all images are resized to a height of 32 pixels while keeping the original aspect ratio. The dataset is divided into training, validation, and test sets consisting of 1.5M, 0.5M, and 0.5M word images, respectively, each with corresponding ground-truth transcriptions. There are 77,352 unique Assamese words in the training set.
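The per-image randomization described above can be sketched as follows. This is a minimal illustration only: the page does not publish the exact sampling ranges for font size, intensity, kerning, or skew, so the concrete values below are assumptions, and the actual rendering step (done with ImageMagick, Pango, and Cairo) is omitted.

```python
import random

def sample_render_params(rng: random.Random) -> dict:
    """Sample per-word rendering parameters in the spirit of the dataset
    description: random styling, a background that is always lighter than
    the foreground, kerning and skew jitter, and a 1-in-4 chance of
    Gaussian smoothing with sigma = 0.5. All numeric ranges here are
    illustrative assumptions, not the dataset's exact values."""
    fg = rng.randint(0, 100)            # darker foreground intensity
    bg = rng.randint(fg + 50, 255)      # background kept strictly lighter
    return {
        "font_size": rng.randint(20, 48),          # assumed range
        "bold": rng.random() < 0.5,
        "italic": rng.random() < 0.5,
        "fg_intensity": fg,
        "bg_intensity": bg,
        "kerning": rng.uniform(-1.0, 1.0),         # assumed jitter
        "skew_deg": rng.uniform(-5.0, 5.0),        # assumed jitter
        # one-fourth of images are smoothed with sigma = 0.5 (per the text)
        "smooth_sigma": 0.5 if rng.random() < 0.25 else 0.0,
    }
```

After rendering with these parameters, each image would be resized to a height of 32 pixels with its aspect ratio preserved (new width = round(width × 32 / height)).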

Training Set:

train.zip contains a folder named “images” with 1.5M word-level images, “train_gt.txt” listing each image name and its ground-truth text separated by a tab, and “vocabulary.txt” listing the 77,352 unique words in the training set.

Validation Set:

val.zip contains a folder named “images” with 0.5M word-level images, and “val_gt.txt” listing each image name and its ground-truth text separated by a tab.

Test Set:

test.zip contains a folder named “images” with 0.5M word-level images, and “test_gt.txt” listing each image name and its ground-truth text separated by a tab.
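The three ground-truth files share the same one-entry-per-line, tab-separated layout, so a single loader covers all of them. A minimal sketch, assuming each non-empty line is `<image name>` TAB `<transcription>` (the function name `load_gt` is ours, not part of the dataset):

```python
from pathlib import Path

def load_gt(gt_path: str) -> dict:
    """Parse a tab-separated ground-truth file (train_gt.txt, val_gt.txt,
    or test_gt.txt) into a mapping from image name to transcription."""
    gt = {}
    for line in Path(gt_path).read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue  # skip blank lines
        # split only on the first tab so the text itself may contain spaces
        name, text = line.split("\t", 1)
        gt[name] = text
    return gt
```

Reading with an explicit `encoding="utf-8"` matters here, since the transcriptions are Assamese Unicode text.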

Downloads

To download the Train, Validation, or Test data, please log in.


Sample Word Level Images from Training Set

(Table of sample word-level images from the training set, each paired with its Assamese ground-truth transcription; the transcriptions are mis-encoded in this copy and are not reproduced.)

Citation

If you use this dataset, please cite the following papers:

@inproceedings{mathew2017benchmarking, 
  title={Benchmarking scene text recognition in Devanagari, Telugu and Malayalam}, 
  author={Mathew, Minesh and Jain, Mohit and Jawahar, CV}, 
  booktitle={2017 14th IAPR international conference on document analysis and recognition (ICDAR)}, 
  volume={7}, 
  pages={42--46}, 
  year={2017}, 
  organization={IEEE} 
} 

@inproceedings{gunna2021transfer, 
  title={Transfer learning for scene text recognition in Indian languages}, 
  author={Gunna, Sanjana and Saluja, Rohit and Jawahar, CV}, 
  booktitle={International Conference on Document Analysis and Recognition}, 
  pages={182--197}, 
  year={2021}, 
  organization={Springer} 
} 

@inproceedings{lunia2023indicstr12, 
  title={IndicSTR12: A Dataset for Indic Scene Text Recognition}, 
  author={Lunia, Harsh and Mondal, Ajoy and Jawahar, CV}, 
  booktitle={International Conference on Document Analysis and Recognition}, 
  pages={233--250}, 
  year={2023}, 
  organization={Springer} 
} 
