IIIT-Synthetic-IndicSTR-Manipuri

Language

Manipuri

Modality

Scene Text

Description

The IIIT-Synthetic-IndicSTR-Manipuri dataset consists of 2 million (2M) synthetically created word images along with their corresponding annotations. Freely available Unicode fonts are used to render the synthetic word images, with ImageMagick, Pango, and Cairo as the rendering tools. To mimic typical document images, every image has a background that is lighter (higher intensity) than the foreground. Each word is rendered using a random font, and the font size, font styling (bold, italic), foreground and background intensities, kerning, and skew are varied per image to produce a diverse set of samples. A random one-fourth of the images are smoothed with a Gaussian filter of standard deviation (σ) 0.5. Finally, all images are resized to a height of 32 pixels while preserving the original aspect ratio. The dataset is divided into training, validation, and test sets of 1.5M, 0.5M, and 0.5M word images, respectively, each with corresponding ground-truth transcriptions. The training set covers 113,632 unique Manipuri words.
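The per-image pipeline can be sketched as follows. The sketch below uses Pillow as a stand-in for the rendering step (the dataset itself uses ImageMagick/Pango/Cairo, which unlike plain Pillow perform full complex-script shaping); the font path, font-size range, and shear range are illustrative assumptions, while the lighter-background constraint, the one-fourth Gaussian smoothing with σ = 0.5, and the resize to height 32 follow the description above.

import random
from PIL import Image, ImageDraw, ImageFont, ImageFilter

TARGET_HEIGHT = 32  # final image height; width follows the aspect ratio

def render_word(word, font_path, out_path):
    # Font and intensities vary per image; background stays lighter than foreground.
    font_size = random.randint(24, 48)                 # assumed range
    font = ImageFont.truetype(font_path, font_size)
    bg = random.randint(180, 255)                      # light background
    fg = random.randint(0, bg - 80)                    # darker foreground
    # Measure the text to size the canvas.
    probe = ImageDraw.Draw(Image.new("L", (1, 1)))
    left, top, right, bottom = probe.textbbox((0, 0), word, font=font)
    img = Image.new("L", (right - left + 16, bottom - top + 16), color=bg)
    ImageDraw.Draw(img).text((8 - left, 8 - top), word, fill=fg, font=font)
    # Random skew via a horizontal shear (assumed range).
    shear = random.uniform(-0.2, 0.2)
    img = img.transform(img.size, Image.AFFINE,
                        (1, shear, 0, 0, 1, 0), fillcolor=bg)
    # A random one-fourth of the images are smoothed with sigma = 0.5.
    if random.random() < 0.25:
        img = img.filter(ImageFilter.GaussianBlur(radius=0.5))
    # Resize to height 32, preserving the aspect ratio.
    w, h = img.size
    img = img.resize((max(1, round(w * TARGET_HEIGHT / h)), TARGET_HEIGHT))
    img.save(out_path)

render_word("āϤ⧋āϞ⧋āĻ•", "NotoSansBengali-Regular.ttf", "sample.png")  # font path is a placeholder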

Training Set:

train.zip contains a folder named “images” with 1.5M word-level images; “train_gt.txt”, which lists each image name and its ground-truth text separated by a tab; and “vocabulary.txt”, which lists the 113,632 words in the training set.

Validation Set:

val.zip contains a folder named “images” with 0.5M word-level images, and “val_gt.txt”, which lists each image name and its ground-truth text separated by a tab.

Test Set:

test.zip contains a folder named “images” with 0.5M word-level images, and “test_gt.txt”, which lists each image name and its ground-truth text separated by a tab.
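All three ground-truth files share the same layout: one sample per line, with the image name and its transcription separated by a single tab. A minimal loading sketch, assuming the archives are extracted into the current directory (file names as listed above):

import os

def load_ground_truth(gt_path, image_dir="images"):
    """Return (image_path, transcription) pairs from a *_gt.txt file."""
    samples = []
    with open(gt_path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            name, text = line.split("\t", 1)   # one tab separates name and text
            samples.append((os.path.join(image_dir, name), text))
    return samples

train = load_ground_truth("train_gt.txt")          # from train.zip
vocabulary = [w.strip() for w in open("vocabulary.txt", encoding="utf-8")]
print(len(train), len(vocabulary))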

Downloads

Train Test Val

Sample Word-Level Images from the Training Set

Ground Truth
āϤ⧋āϞ⧋āĻ•
āϞ⧈āϜāĻ–ā§āϰāϏāύ⧁
āĻ™āĻžāχāϰāĻŽāĻĻ⧁āύāĻž
āĻ•āĻ‚āϞāĻŦāĻĻāĻž
āϞ⧁āĻ™ā§āĻ—āĻŋāĻĻ⧌āύāĻž
āĻĒāĻžāĻ•āĻ–āύ
⧟āĻžāĻ“āĻŦāĻĻ⧁āϏ⧁
āϛ⧁āϟāĻŋ
āĻšā§‹āĻ‚āϞāĻ•āχ
āĻšā§ˆāϰāĻŋāύ⧇
āϏāύāĻžāύ⧋
⧟āĻžāϰāĻ•ā§āϞ⧇
āĻļāĻžāĻ“āĻ—ā§ŽāĻĒāĻĻāĻž
āϞ⧈āĻĻ⧌
āϞ⧈āϰ⧁āĻŦāĻĻ⧁
āύ⧁āĻĒā§€-āĻ…āĻ™āĻžāĻ‚āĻļāĻŋāĻ‚āύāĻž
āĻ•āύāĻžāĻ—ā§€āĻĻāĻŽāĻ•ā§āύ⧋
āĻļāĻŋā§ŽāĻĒā§āϰāĻž
āĻ•ā§ŽāĻ–āĻŋāĻŦāύāĻŋ
ā§ąāĻžāĻ°ā§āĻĻāϤāĻž
āĻšāĻžāĻ¨ā§āύāĻĻāĻŋ
āĻšā§‹āĻ‚āĻ–ā§āϰāĻŦāĻĻāĻž
āϞ⧁āĻŽāĻžāĻ‚āĻĻāĻž
āĻšāĻžā§āϜāĻŋāĻ˛ā§āϞāĻ•ā§āϞāĻŦāĻž
āψāϚāĻžāĻ“āϗ⧁āĻŽ
āϚāϞāĻžāχāϰāĻŋāĻŦāύāĻŋāύāĻž
āĻ…ā§ąāĻž-āύ⧁āĻ‚āĻļāĻŋ
āϚ⧁āĻŽā§āύāĻĻāĻž
āĻļāĻžāϰāϏāĻŋ
āĻ™āĻžāχāϰāĻ•āĻĒāĻĻāĻŋ
āĻ…āĻĢāĻžāĻ“āĻŦā§€āϏāĻŋāĻĻāĻŋ
āĻĒā§‹ā§ŽāϞ⧁āĻŽāĻĻ⧁
āϞāĻŋā§ŽāϞ⧁āĻŦāĻĻ⧁āύāĻž
āĻĒ⧁āύāĻļāĻŋāĻ˛ā§āϞāĻŋ
ā§ąāĻžāĻ™āĻžāĻ‚-ā§ąāĻžāĻšā§ˆāĻļāĻŋāĻ‚āĻĻ⧁
āĻĒāĻ™āϞāĻŽā§āĻŦāĻž
ā§ąāĻžāĻ™āϞ
āϏ⧁āĻŽāĻŋāϤāĻžāĻ—ā§€
āχāĻļ⧇āĻ‚
āĻŠā§ŽāĻ•āύāĻŋ
āĻšā§āĻŽāϜāĻŋāĻ˛ā§āϞāĻ•āĻĒāĻž
āĻ•āĻžā§ŸāĻĻā§‹āĻ™āĻĢāĻŽ
āĻ•āĻ•āĻĨā§ŽāĻĒā§€āϰ⧋
āϚāĻžāύāĻŋāĻ‚āĻĻ⧇āϕ⧋
āĻĨāĻŋāĻ˛ā§āϞāĻ•āĻĒāĻž
āύāĻžāĻ•ā§āϤāĻ—ā§€
āĻŦāĻŦāĻžāĻŦ⧁
āϚāĻžāωāĻŦā§€āύāĻž
āϤ⧌āϜāϰāĻ•ā§āĻ¤ā§āϰāĻŋāĻŦāύāĻŋ
āĻŦāĻ¨ā§āĻĻāĻŋāϗ⧁āĻŽ
āĻŽā§ˆāύ⧀āĻ‚
āĻ…āĻļāĻŽā§āĻŦāĻž
āĻšā§ˆāĻĨāϰāϗ⧇
āĻļ⧁āĻĨāĻžāύ⧇āĻĻ⧁
āĻĒ⧁āϰāĻŋāĻŦāĻž
āĻšā§€āĻĒā§āϞāĻŋāĻŦāĻž
āĻ–āĻ¨ā§āĻĨāύāϰāĻ—āĻž
āĻĢ⧌āĻ°ā§‹ā§ŸāύāĻž
āĻˇā§āĻŸā§‡āύ⧋
āĻŽāĻžā§ŸāĻĨā§€āĻŦāĻĻāχ
āϞ⧂āĻ•ā§āϞāĻžāĻ™
āύ⧋āĻ•āĻĒāĻĻ⧁āĻĻāĻž
āĻ…āϤ⧌āĻŽā§āĻŦā§€āύāĻž
āĻŽā§‹āχāĻŦ⧁āĻ‚āĻ—ā§€
āĻĨāĻŋāĻ‚āϞāĻ—āύāĻŋ
āĻŽāĻĢāĻŽāĻĻ⧁āĻŦ⧁
āĻŽāĻĨā§‹āĻ‚-āĻŽāϰāĻŽ
āϞāĻŽā§ŸāĻžāύāĻŦāύāĻŋāĨ¤
āϧ⧀āϰ⧇āύāĻ—ā§€
āĻŦāĻžāĻŦāĻžāĻ—
āĻŽāϤ⧇āĻ•ā§āϕ⧀
⧟āĻžāĻ“āϰāϰ⧋āχāĻĻāĻŦā§€
āĻļāĻ™āĻ—ā§‹ā§ŸāĻ—ā§€
āϤāĻžā§āϜāϰ⧇āĻĻāĻž
āχāĻĒā§‹āĻšāϞāύāĻž
āĻĒ⧁āύāĻ–ā§ŽāϞāĻŋ
āĻļāĻ™ā§āĻ—āĻŋāύ
āĻ–ā§‹āĻ™āωāĻĒā§āύāϚāĻŋāĻ‚āĻŦāĻž
āĻļāĻ–ā§€āύāĻž
āϞ⧇āĻ•āϚāϰ
āϚāĻžāĻŽāĻŋāĻ¨ā§āύāĻ—āύāĻŋ
āĻŽā§€āĻ—ā§€āĻĻāĻŋāύ⧇
āϚāĻ™āϞāĻ•ā§āϤ⧇
āĻšā§ŽāϞāĻŋāĻŦāύāĻŋ
āĻĒāĻžāĻ‚āĻŦāĻŋāĻ–āĻŋāĻŦāĻž
āĻŸā§‡āĻ•ā§āύāĻŋāϕ⧇āϞ
āĻŽāĻžā§ŸāĻĨā§€āĻ–āĻŋ
⧟āĻžāχāĻĢāĻŦā§€
āĻ…āϤ⧇āĻ•-āĻ…āϰāĻžāĻ•
āĻŦ⧇āĻ—āĻļāĻŋ
āĻļā§‚āϜ-āύ⧋āĻŽāϜāĻŦāĻž
āϞāĻžāωāϰāĻ•āĻĒāĻĻāĻž
āĻšāĻžā§ŸāĻĢā§‡ā§ŽāύāĻž
āĻ¤ā§āϰ⧋āĻ‚
āĻĨ⧁āĻ—āĻžāχāύāύāĻŦāĻ—ā§€āĻĻāĻŽāĻ•
āĻĨā§‹āĻ•ā§āĻ•āĻĻ⧌āĻŦāϰ⧋
āĻ•ā§ā§āĻĒā§āύāĻĻāĻŋ
āĻšāĻ•āĻļ⧇āϞ
āĻŽāĻžāύ-āϚāĻžāĻ‚
āĻĨā§‹āĻ™āύāĻžāĻ“āϏāĻŋāĻĻāĻž

Citation

If you use this dataset, please cite the following papers:

@inproceedings{mathew2017benchmarking,
  title={Benchmarking scene text recognition in Devanagari, Telugu and Malayalam},
  author={Mathew, Minesh and Jain, Mohit and Jawahar, CV},
  booktitle={2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR)},
  volume={7},
  pages={42--46},
  year={2017},
  organization={IEEE}
}

@inproceedings{gunna2021transfer, 
  title={Transfer learning for scene text recognition in Indian languages}, 
  author={Gunna, Sanjana and Saluja, Rohit and Jawahar, CV}, 
  booktitle={International Conference on Document Analysis and Recognition}, 
  pages={182--197}, 
  year={2021}, 
  organization={Springer} 
} 

@inproceedings{lunia2023indicstr12, 
  title={IndicSTR12: A Dataset for Indic Scene Text Recognition}, 
  author={Lunia, Harsh and Mondal, Ajoy and Jawahar, CV}, 
  booktitle={International Conference on Document Analysis and Recognition}, 
  pages={233--250}, 
  year={2023}, 
  organization={Springer} 
} 
