{"id":1391,"date":"2020-07-28T03:02:18","date_gmt":"2020-07-28T03:02:18","guid":{"rendered":"https:\/\/muthu.co\/?p=1391"},"modified":"2021-05-24T02:31:44","modified_gmt":"2021-05-24T02:31:44","slug":"all-tesseract-ocr-options","status":"publish","type":"post","link":"http:\/\/write.muthu.co\/all-tesseract-ocr-options\/","title":{"rendered":"All Tesseract OCR options"},"content":{"rendered":"\n

This is mainly for my own reference, but it might come in handy for others too.<\/p>\n\n\n\n

All Tesseract options<\/h2>\n\n\n\n
$ tesseract --help-extra<\/code><\/pre>\n\n\n\n
Usage:\n  tesseract --help | --help-extra | --help-psm | --help-oem | --version\n  tesseract --list-langs [--tessdata-dir PATH]\n  tesseract --print-parameters [options...] [configfile...]\n  tesseract imagename|imagelist|stdin outputbase|stdout [options...] [configfile...]\nOCR options:\n  --tessdata-dir PATH   Specify the location of tessdata path.\n  --user-words PATH     Specify the location of user words file.\n  --user-patterns PATH  Specify the location of user patterns file.\n  -l LANG[+LANG]        Specify language(s) used for OCR.\n  -c VAR=VALUE          Set value for config variables.\n                        Multiple -c arguments are allowed.\n  --psm NUM             Specify page segmentation mode.\n  --oem NUM             Specify OCR Engine mode.\nNOTE: These options must occur before any configfile.\nPage segmentation modes:\n  0    Orientation and script detection (OSD) only.\n  1    Automatic page segmentation with OSD.\n  2    Automatic page segmentation, but no OSD, or OCR.\n  3    Fully automatic page segmentation, but no OSD. (Default)\n  4    Assume a single column of text of variable sizes.\n  5    Assume a single uniform block of vertically aligned text.\n  6    Assume a single uniform block of text.\n  7    Treat the image as a single text line.\n  8    Treat the image as a single word.\n  9    Treat the image as a single word in a circle.\n 10    Treat the image as a single character.\n 11    Sparse text. Find as much text as possible in no particular order.\n 12    Sparse text with OSD.\n 13    Raw line. Treat the image as a single text line,\n       bypassing hacks that are Tesseract-specific.\nOCR Engine modes: (see https:\/\/github.com\/tesseract-ocr\/tesseract\/wiki#linux)\n  0    Legacy engine only.\n  1    Neural nets LSTM engine only.\n  2    Legacy + LSTM engines.\n  3    Default, based on what is available.\nSingle options:\n  -h, --help            Show minimal help message.\n  --help-extra          Show extra help for advanced users.\n  --help-psm            Show page segmentation modes.\n  --help-oem            Show OCR Engine modes.\n  -v, --version         Show version information.\n  --list-langs          List available languages for tesseract engine.\n  --print-parameters    Print tesseract parameters.<\/code><\/pre>\n\n\n\n
\n\n\n\n

CLI Examples<\/h2>\n\n\n\n
Command<\/strong> Example<\/th>Notes<\/th><\/tr><\/thead>
tesseract sample_images\/image2.jpg stdout<\/code><\/td>To print the output to standard output<\/td><\/tr>
tesseract sample_images\/image2.jpg sample_images\/output<\/code><\/td>The recognized text is written to the given output base with a .txt extension, i.e. sample_images\/output.txt here.<\/td><\/tr>
tesseract sample_images\/image2.jpg sample_images\/output -l eng<\/code><\/td>-l specifies the language. English (eng) is the default if no language is given.<\/td><\/tr>
tesseract --list-langs<\/code><\/td>To list available languages with codes. <\/td><\/tr>
tesseract image.png out -l eng+deu+fra+ita+spa+por<\/code><\/td>To use multiple languages together.<\/td><\/tr>
sudo apt install tesseract-ocr-ara<\/code><\/td>Install the Arabic language data (language code ara).<\/td><\/tr>
tesseract sample_images\/image2.jpg sample_images\/output --psm 10<\/code><\/td>PSM stands for Page Segmentation Mode.
For single-character recognition, use --psm 10.<\/td><\/tr>
tesseract sample_images\/image2.jpg stdout -l eng --psm 6 -c tessedit_char_whitelist=abcdefghijklmnopqrstuvwxyz<\/code><\/td>Restrict recognition to a whitelist of characters. Use the -c option to set any extra config variable.<\/td><\/tr>
tesseract sample_images\/image2.jpg stdout --oem 3<\/code><\/td>Run with the default OCR Engine mode (--oem 3: based on what is available).<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n
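The same switches can be driven from Python through pytesseract's config string. A quick sketch of my own (not part of the table above), combining the language, engine mode, page segmentation mode and whitelist options from the rows above:<\/p>\n\n\n\n
from PIL import Image\nimport pytesseract\n\n# Roughly the pytesseract equivalent of the whitelist example in the table above\ncustom_config = r'--oem 3 --psm 6 -c tessedit_char_whitelist=abcdefghijklmnopqrstuvwxyz'\ntext = pytesseract.image_to_string(Image.open('sample_images\/image2.jpg'), lang='eng', config=custom_config)\nprint(text)<\/code><\/pre>\n\n\n\n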
\n\n\n\n

Quickstart guide for pytesseract<\/h3>\n\n\n\n
try:\n    from PIL import Image\nexcept ImportError:\n    import Image\nimport pytesseract\n# If you don't have the tesseract executable in your PATH, include the following:\npytesseract.pytesseract.tesseract_cmd = r'<full_path_to_your_tesseract_executable>'\n# Example tesseract_cmd = r'C:\\Program Files (x86)\\Tesseract-OCR\\tesseract'\n# Simple image to string\nprint(pytesseract.image_to_string(Image.open('test.png')))\n# French text image to string\nprint(pytesseract.image_to_string(Image.open('test-european.jpg'), lang='fra'))\n# In order to bypass the image conversions of pytesseract, just use a relative or absolute image path\n# NOTE: In this case you should provide tesseract-supported images or tesseract will return an error\nprint(pytesseract.image_to_string('test.png'))\n# Batch processing with a single file containing the list of multiple image file paths\nprint(pytesseract.image_to_string('images.txt'))\n# Timeout\/terminate the tesseract job after a period of time\ntry:\n    print(pytesseract.image_to_string('test.jpg', timeout=2)) # Timeout after 2 seconds\n    print(pytesseract.image_to_string('test.jpg', timeout=0.5)) # Timeout after half a second\nexcept RuntimeError as timeout_error:\n    # Tesseract processing is terminated\n    pass\n# Get bounding box estimates\nprint(pytesseract.image_to_boxes(Image.open('test.png')))\n# Get verbose data including boxes, confidences, line and page numbers\nprint(pytesseract.image_to_data(Image.open('test.png')))\n# Get information about orientation and script detection\nprint(pytesseract.image_to_osd(Image.open('test.png')))\n# Get a searchable PDF\npdf = pytesseract.image_to_pdf_or_hocr('test.png', extension='pdf')\nwith open('test.pdf', 'w+b') as f:\n    f.write(pdf) # pdf type is bytes by default\n# Get HOCR output\nhocr = pytesseract.image_to_pdf_or_hocr('test.png', extension='hocr')\n# Executing with extra parameters supported by the Tesseract command line; the full signature is:\n# image_to_data(image, lang=None, config='', nice=0, output_type=Output.STRING, timeout=0, pandas_config=None)\n# Example of adding any additional options\ncustom_oem_psm_config = r'--oem 3 --psm 6'\nprint(pytesseract.image_to_string(Image.open('test.png'), config=custom_oem_psm_config))\n\n# Supported output types for pytesseract: Output.BYTES, Output.DICT, Output.STRING\n# Usage\nprint(pytesseract.image_to_data(Image.open('test.png'), config='--psm 10', output_type='dict'))\nprint(pytesseract.image_to_data(Image.open('test.png'), config='--psm 10', output_type='string'))\nprint(pytesseract.image_to_data(Image.open('test.png'), config='--psm 10', output_type='bytes'))<\/code><\/pre>\n\n\n\n
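Of these, image_to_data is the one I reach for most, so here is a small usage sketch of my own (assuming the same setup as above) that pulls word-level boxes and confidences out of the DICT output type:<\/p>\n\n\n\n
from PIL import Image\nimport pytesseract\nfrom pytesseract import Output\n\n# Word-level results come back as a dict of parallel lists\n# ('text', 'conf', 'left', 'top', 'width', 'height', ...)\ndata = pytesseract.image_to_data(Image.open('test.png'), output_type=Output.DICT)\nfor i, word in enumerate(data['text']):\n    conf = float(data['conf'][i])   # -1 marks non-word boxes (blocks, lines, ...)\n    if word.strip() and conf > 60:  # 60 is just an illustrative threshold\n        x, y, w, h = data['left'][i], data['top'][i], data['width'][i], data['height'][i]\n        print(f'{word} (conf {conf:.0f}) at x={x}, y={y}, w={w}, h={h}')<\/code><\/pre>\n\n\n\n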
\n\n\n\n

List of all Tesseract Parameters as of version 3.0<\/h2>\n\n\n\n
$ tesseract --print-parameters<\/code><\/pre>\n\n\n\n
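Any of the variables in the table below can be overridden at run time with -c name=value (multiple -c arguments are allowed), or collected into a config file passed after the other options. A quick sketch of my own, setting two parameters from this table through pytesseract; the values used are only illustrative:<\/p>\n\n\n\n
from PIL import Image\nimport pytesseract\n\n# preserve_interword_spaces and tessedit_char_blacklist are taken from the table below;\n# the chosen values are only examples\nconfig = '--psm 6 -c preserve_interword_spaces=1 -c tessedit_char_blacklist=|'\nprint(pytesseract.image_to_string(Image.open('test.png'), config=config))<\/code><\/pre>\n\n\n\n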
Name<\/strong><\/th>Default value<\/strong><\/th>Description<\/th><\/tr><\/thead>
textord_debug_tabfind<\/td>0<\/td>Debug tab finding<\/td><\/tr>
textord_debug_bugs<\/td>0<\/td>Turn on output related to bugs in tab finding<\/td><\/tr>
textord_testregion_left<\/td>-1<\/td>Left edge of debug reporting rectangle<\/td><\/tr>
textord_testregion_top<\/td>-1<\/td>Top edge of debug reporting rectangle<\/td><\/tr>
textord_testregion_right<\/td>2147483647<\/td>Right edge of debug rectangle<\/td><\/tr>
textord_testregion_bottom<\/td>2147483647<\/td>Bottom edge of debug rectangle<\/td><\/tr>
textord_tabfind_show_partitions<\/td>0<\/td>Show partition bounds, waiting if >1<\/td><\/tr>
devanagari_split_debuglevel<\/td>0<\/td>Debug level for split shiro-rekha process.<\/td><\/tr>
edges_max_children_per_outline<\/td>16<\/td>Max number of children inside a character outline<\/td><\/tr>
edges_max_children_layers<\/td>4<\/td>Max layers of nested children inside a character outline<\/td><\/tr>
edges_children_per_grandchild<\/td>9<\/td>Importance ratio for chucking outlines<\/td><\/tr>
edges_children_count_limit<\/td>46<\/td>Max holes allowed in blob<\/td><\/tr>
edges_min_nonhole<\/td>14<\/td>Min pixels for potential char in box<\/td><\/tr>
edges_patharea_ratio<\/td>40<\/td>Max lensq\/area for acceptable child outline<\/td><\/tr>
textord_fp_chop_error<\/td>2<\/td>Max allowed bending of chop cells<\/td><\/tr>
textord_tabfind_show_images<\/td>0<\/td>Show image blobs<\/td><\/tr>
classify_num_cp_levels<\/td>3<\/td>Number of Class Pruner Levels<\/td><\/tr>
textord_skewsmooth_offset<\/td>4<\/td>For smooth factor<\/td><\/tr>
textord_skewsmooth_offset2<\/td>1<\/td>For smooth factor<\/td><\/tr>
textord_test_x<\/td>-2147483647<\/td>coord of test pt<\/td><\/tr>
textord_test_y<\/td>-2147483647<\/td>coord of test pt<\/td><\/tr>
textord_min_blobs_in_row<\/td>4<\/td>Min blobs before gradient counted<\/td><\/tr>
textord_spline_minblobs<\/td>8<\/td>Min blobs in each spline segment<\/td><\/tr>
textord_spline_medianwin<\/td>6<\/td>Size of window for spline segmentation<\/td><\/tr>
textord_max_blob_overlaps<\/td>4<\/td>Max number of blobs a big blob can overlap<\/td><\/tr>
textord_min_xheight<\/td>10<\/td>Min credible pixel xheight<\/td><\/tr>
textord_lms_line_trials<\/td>12<\/td>Number of line fits to do<\/td><\/tr>
oldbl_holed_losscount<\/td>10<\/td>Max lost before fallback line used<\/td><\/tr>
editor_image_xpos<\/td>590<\/td>Editor image X Pos<\/td><\/tr>
editor_image_ypos<\/td>10<\/td>Editor image Y Pos<\/td><\/tr>
editor_image_menuheight<\/td>50<\/td>Add to image height for menu bar<\/td><\/tr>
editor_image_word_bb_color<\/td>7<\/td>Word bounding box colour<\/td><\/tr>
editor_image_blob_bb_color<\/td>4<\/td>Blob bounding box colour<\/td><\/tr>
editor_image_text_color<\/td>2<\/td>Correct text colour<\/td><\/tr>
editor_dbwin_xpos<\/td>50<\/td>Editor debug window X Pos<\/td><\/tr>
editor_dbwin_ypos<\/td>500<\/td>Editor debug window Y Pos<\/td><\/tr>
editor_dbwin_height<\/td>24<\/td>Editor debug window height<\/td><\/tr>
editor_dbwin_width<\/td>80<\/td>Editor debug window width<\/td><\/tr>
editor_word_xpos<\/td>60<\/td>Word window X Pos<\/td><\/tr>
editor_word_ypos<\/td>510<\/td>Word window Y Pos<\/td><\/tr>
editor_word_height<\/td>240<\/td>Word window height<\/td><\/tr>
editor_word_width<\/td>655<\/td>Word window width<\/td><\/tr>
pitsync_linear_version<\/td>6<\/td>Use new fast algorithm<\/td><\/tr>
pitsync_fake_depth<\/td>1<\/td>Max advance fake generation<\/td><\/tr>
textord_tabfind_show_strokewidths<\/td>0<\/td>Show stroke widths<\/td><\/tr>
textord_dotmatrix_gap<\/td>3<\/td>Max pixel gap for broken fixed pitch<\/td><\/tr>
textord_debug_block<\/td>0<\/td>Block to do debug on<\/td><\/tr>
textord_pitch_range<\/td>2<\/td>Max range test on pitch<\/td><\/tr>
textord_words_veto_power<\/td>5<\/td>Rows required to outvote a veto<\/td><\/tr>
textord_debug_images<\/td>0<\/td>Use greyed image background for debug<\/td><\/tr>
textord_debug_printable<\/td>0<\/td>Make debug windows printable<\/td><\/tr>
stream_filelist<\/td>0<\/td>Stream a filelist from stdin<\/td><\/tr>
textord_space_size_is_variable<\/td>0<\/td>If true, word delimiter spaces are assumed to have variable width, even though characters have fixed pitch.<\/td><\/tr>
textord_tabfind_show_initial_partitions<\/td>0<\/td>Show partition bounds<\/td><\/tr>
textord_tabfind_show_reject_blobs<\/td>0<\/td>Show blobs rejected as noise<\/td><\/tr>
textord_tabfind_show_columns<\/td>0<\/td>Show column bounds<\/td><\/tr>
textord_tabfind_show_blocks<\/td>0<\/td>Show final block bounds<\/td><\/tr>
textord_tabfind_find_tables<\/td>1<\/td>run table detection<\/td><\/tr>
textord_tabfind_show_color_fit<\/td>0<\/td>Show stroke widths<\/td><\/tr>
devanagari_split_debugimage<\/td>0<\/td>Whether to create a debug image for split shiro-rekha process.<\/td><\/tr>
textord_show_fixed_cuts<\/td>0<\/td>Draw fixed pitch cell boundaries<\/td><\/tr>
edges_use_new_outline_complexity<\/td>0<\/td>Use the new outline complexity module<\/td><\/tr>
edges_debug<\/td>0<\/td>turn on debugging for this module<\/td><\/tr>
edges_children_fix<\/td>0<\/td>Remove boxy parents of char-like children<\/td><\/tr>
equationdetect_save_bi_image<\/td>0<\/td>Save input bi image<\/td><\/tr>
equationdetect_save_spt_image<\/td>0<\/td>Save special character image<\/td><\/tr>
equationdetect_save_seed_image<\/td>0<\/td>Save the seed image<\/td><\/tr>
equationdetect_save_merged_image<\/td>0<\/td>Save the merged image<\/td><\/tr>
gapmap_debug<\/td>0<\/td>Say which blocks have tables<\/td><\/tr>
gapmap_use_ends<\/td>0<\/td>Use large space at start and end of rows<\/td><\/tr>
gapmap_no_isolated_quanta<\/td>0<\/td>Ensure gaps not less than 2 quanta wide<\/td><\/tr>
textord_heavy_nr<\/td>0<\/td>Vigorously remove noise<\/td><\/tr>
textord_show_initial_rows<\/td>0<\/td>Display row accumulation<\/td><\/tr>
textord_show_parallel_rows<\/td>0<\/td>Display page correlated rows<\/td><\/tr>
textord_show_expanded_rows<\/td>0<\/td>Display rows after expanding<\/td><\/tr>
textord_show_final_rows<\/td>0<\/td>Display rows after final fitting<\/td><\/tr>
textord_show_final_blobs<\/td>0<\/td>Display blob bounds after pre-ass<\/td><\/tr>
textord_test_landscape<\/td>0<\/td>Tests refer to land\/port<\/td><\/tr>
textord_parallel_baselines<\/td>1<\/td>Force parallel baselines<\/td><\/tr>
textord_straight_baselines<\/td>0<\/td>Force straight baselines<\/td><\/tr>
textord_old_baselines<\/td>1<\/td>Use old baseline algorithm<\/td><\/tr>
textord_old_xheight<\/td>0<\/td>Use old xheight algorithm<\/td><\/tr>
textord_fix_xheight_bug<\/td>1<\/td>Use spline baseline<\/td><\/tr>
textord_fix_makerow_bug<\/td>1<\/td>Prevent multiple baselines<\/td><\/tr>
textord_debug_xheights<\/td>0<\/td>Test xheight algorithms<\/td><\/tr>
textord_biased_skewcalc<\/td>1<\/td>Bias skew estimates with line length<\/td><\/tr>
textord_interpolating_skew<\/td>1<\/td>Interpolate across gaps<\/td><\/tr>
textord_new_initial_xheight<\/td>1<\/td>Use test xheight mechanism<\/td><\/tr>
textord_debug_blob<\/td>0<\/td>Print test blob information<\/td><\/tr>
textord_really_old_xheight<\/td>0<\/td>Use original wiseowl xheight<\/td><\/tr>
textord_oldbl_debug<\/td>0<\/td>Debug old baseline generation<\/td><\/tr>
textord_debug_baselines<\/td>0<\/td>Debug baseline generation<\/td><\/tr>
textord_oldbl_paradef<\/td>1<\/td>Use para default mechanism<\/td><\/tr>
textord_oldbl_split_splines<\/td>1<\/td>Split stepped splines<\/td><\/tr>
textord_oldbl_merge_parts<\/td>1<\/td>Merge suspect partitions<\/td><\/tr>
oldbl_corrfix<\/td>1<\/td>Improve correlation of heights<\/td><\/tr>
oldbl_xhfix<\/td>0<\/td>Fix bug in modes threshold for xheights<\/td><\/tr>
textord_ocropus_mode<\/td>0<\/td>Make baselines for ocropus<\/td><\/tr>
poly_debug<\/td>0<\/td>Debug old poly<\/td><\/tr>
poly_wide_objects_better<\/td>1<\/td>More accurate approx on wide things<\/td><\/tr>
wordrec_display_all_blobs<\/td>0<\/td>Display Blobs<\/td><\/tr>
wordrec_display_all_words<\/td>0<\/td>Display Words<\/td><\/tr>
wordrec_blob_pause<\/td>0<\/td>Blob pause<\/td><\/tr>
wordrec_display_splits<\/td>0<\/td>Display splits<\/td><\/tr>
textord_tabfind_only_strokewidths<\/td>0<\/td>Only run stroke widths<\/td><\/tr>
textord_tabfind_show_initialtabs<\/td>0<\/td>Show tab candidates<\/td><\/tr>
textord_tabfind_show_finaltabs<\/td>0<\/td>Show tab vectors<\/td><\/tr>
textord_dump_table_images<\/td>0<\/td>Paint table detection output<\/td><\/tr>
textord_show_tables<\/td>0<\/td>Show table regions<\/td><\/tr>
textord_tablefind_show_mark<\/td>0<\/td>Debug table marking steps in detail<\/td><\/tr>
textord_tablefind_show_stats<\/td>0<\/td>Show page stats used in table finding<\/td><\/tr>
textord_tablefind_recognize_tables<\/td>0<\/td>Enables the table recognizer for table layout and filtering.<\/td><\/tr>
textord_all_prop<\/td>0<\/td>All doc is proportional text<\/td><\/tr>
textord_debug_pitch_test<\/td>0<\/td>Debug on fixed pitch test<\/td><\/tr>
textord_disable_pitch_test<\/td>0<\/td>Turn off dp fixed pitch algorithm<\/td><\/tr>
textord_fast_pitch_test<\/td>0<\/td>Do even faster pitch algorithm<\/td><\/tr>
textord_debug_pitch_metric<\/td>0<\/td>Write full metric stuff<\/td><\/tr>
textord_show_row_cuts<\/td>0<\/td>Draw row-level cuts<\/td><\/tr>
textord_show_page_cuts<\/td>0<\/td>Draw page-level cuts<\/td><\/tr>
textord_pitch_cheat<\/td>0<\/td>Use correct answer for fixed\/prop<\/td><\/tr>
textord_blockndoc_fixed<\/td>0<\/td>Attempt whole doc\/block fixed pitch<\/td><\/tr>
textord_show_initial_words<\/td>0<\/td>Display separate words<\/td><\/tr>
textord_show_new_words<\/td>0<\/td>Display separate words<\/td><\/tr>
textord_show_fixed_words<\/td>0<\/td>Display forced fixed pitch words<\/td><\/tr>
textord_blocksall_fixed<\/td>0<\/td>Moan about prop blocks<\/td><\/tr>
textord_blocksall_prop<\/td>0<\/td>Moan about fixed pitch blocks<\/td><\/tr>
textord_blocksall_testing<\/td>0<\/td>Dump stats when moaning<\/td><\/tr>
textord_test_mode<\/td>0<\/td>Do current test<\/td><\/tr>
textord_pitch_scalebigwords<\/td>0<\/td>Scale scores on big words<\/td><\/tr>
textord_restore_underlines<\/td>1<\/td>Chop underlines and put back<\/td><\/tr>
textord_fp_chopping<\/td>1<\/td>Do fixed pitch chopping<\/td><\/tr>
textord_force_make_prop_words<\/td>0<\/td>Force proportional word segmentation on all rows<\/td><\/tr>
textord_chopper_test<\/td>0<\/td>Chopper is being tested.<\/td><\/tr>
classify_font_name<\/td>UnknownFont<\/td>Default font name to be used in training<\/td><\/tr>
fx_debugfile<\/td>FXDebug<\/td>Name of debugfile<\/td><\/tr>
editor_image_win_name<\/td>EditorImage<\/td>Editor image window name<\/td><\/tr>
editor_dbwin_name<\/td>EditorDBWin<\/td>Editor debug window name<\/td><\/tr>
editor_word_name<\/td>BlnWords<\/td>BL normalized word window<\/td><\/tr>
editor_debug_config_file<\/td><\/td>Config file to apply to single words<\/td><\/tr>
classify_training_file<\/td>MicroFeatures<\/td>Training file<\/td><\/tr>
debug_file<\/td><\/td>File to send tprintf output to<\/td><\/tr>
textord_underline_threshold<\/td>0.5<\/td>Fraction of width occupied<\/td><\/tr>
edges_childarea<\/td>0.5<\/td>Min area fraction of child outline<\/td><\/tr>
edges_boxarea<\/td>0.875<\/td>Min area fraction of grandchild for box<\/td><\/tr>
textord_fp_chop_snap<\/td>0.5<\/td>Max distance of chop pt from vertex<\/td><\/tr>
gapmap_big_gaps<\/td>1.75<\/td>xht multiplier<\/td><\/tr>
classify_cp_angle_pad_loose<\/td>45<\/td>Class Pruner Angle Pad Loose<\/td><\/tr>
classify_cp_angle_pad_medium<\/td>20<\/td>Class Pruner Angle Pad Medium<\/td><\/tr>
classify_cp_angle_pad_tight<\/td>10<\/td>CLass Pruner Angle Pad Tight<\/td><\/tr>
classify_cp_end_pad_loose<\/td>0.5<\/td>Class Pruner End Pad Loose<\/td><\/tr>
classify_cp_end_pad_medium<\/td>0.5<\/td>Class Pruner End Pad Medium<\/td><\/tr>
classify_cp_end_pad_tight<\/td>0.5<\/td>Class Pruner End Pad Tight<\/td><\/tr>
classify_cp_side_pad_loose<\/td>2.5<\/td>Class Pruner Side Pad Loose<\/td><\/tr>
classify_cp_side_pad_medium<\/td>1.2<\/td>Class Pruner Side Pad Medium<\/td><\/tr>
classify_cp_side_pad_tight<\/td>0.6<\/td>Class Pruner Side Pad Tight<\/td><\/tr>
classify_pp_angle_pad<\/td>45<\/td>Proto Pruner Angle Pad<\/td><\/tr>
classify_pp_end_pad<\/td>0.5<\/td>Proto Prune End Pad<\/td><\/tr>
classify_pp_side_pad<\/td>2.5<\/td>Proto Pruner Side Pad<\/td><\/tr>
textord_spline_shift_fraction<\/td>0.02<\/td>Fraction of line spacing for quad<\/td><\/tr>
textord_spline_outlier_fraction<\/td>0.1<\/td>Fraction of line spacing for outlier<\/td><\/tr>
textord_skew_ile<\/td>0.5<\/td>Ile of gradients for page skew<\/td><\/tr>
textord_skew_lag<\/td>0.02<\/td>Lag for skew on row accumulation<\/td><\/tr>
textord_linespace_iqrlimit<\/td>0.2<\/td>Max iqr\/median for linespace<\/td><\/tr>
textord_width_limit<\/td>8<\/td>Max width of blobs to make rows<\/td><\/tr>
textord_chop_width<\/td>1.5<\/td>Max width before chopping<\/td><\/tr>
textord_expansion_factor<\/td>1<\/td>Factor to expand rows by in expand_rows<\/td><\/tr>
textord_overlap_x<\/td>0.375<\/td>Fraction of linespace for good overlap<\/td><\/tr>
textord_minxh<\/td>0.25<\/td>fraction of linesize for min xheight<\/td><\/tr>
textord_min_linesize<\/td>1.25<\/td>* blob height for initial linesize<\/td><\/tr>
textord_excess_blobsize<\/td>1.3<\/td>New row made if blob makes row this big<\/td><\/tr>
textord_occupancy_threshold<\/td>0.4<\/td>Fraction of neighbourhood<\/td><\/tr>
textord_underline_width<\/td>2<\/td>Multiple of line_size for underline<\/td><\/tr>
textord_min_blob_height_fraction<\/td>0.75<\/td>Min blob height\/top to include blob top into xheight stats<\/td><\/tr>
textord_xheight_mode_fraction<\/td>0.4<\/td>Min pile height to make xheight<\/td><\/tr>
textord_ascheight_mode_fraction<\/td>0.08<\/td>Min pile height to make ascheight<\/td><\/tr>
textord_descheight_mode_fraction<\/td>0.08<\/td>Min pile height to make descheight<\/td><\/tr>
textord_ascx_ratio_min<\/td>1.25<\/td>Min cap\/xheight<\/td><\/tr>
textord_ascx_ratio_max<\/td>1.8<\/td>Max cap\/xheight<\/td><\/tr>
textord_descx_ratio_min<\/td>0.25<\/td>Min desc\/xheight<\/td><\/tr>
textord_descx_ratio_max<\/td>0.6<\/td>Max desc\/xheight<\/td><\/tr>
textord_xheight_error_margin<\/td>0.1<\/td>Accepted variation<\/td><\/tr>
classify_min_slope<\/td>0.414214<\/td>Slope below which lines are called horizontal<\/td><\/tr>
classify_max_slope<\/td>2.41421<\/td>Slope above which lines are called vertical<\/td><\/tr>
classify_norm_adj_midpoint<\/td>32<\/td>Norm adjust midpoint …<\/td><\/tr>
classify_norm_adj_curl<\/td>2<\/td>Norm adjust curl …<\/td><\/tr>
oldbl_xhfract<\/td>0.4<\/td>Fraction of est allowed in calc<\/td><\/tr>
oldbl_dot_error_size<\/td>1.26<\/td>Max aspect ratio of a dot<\/td><\/tr>
textord_oldbl_jumplimit<\/td>0.15<\/td>X fraction for new partition<\/td><\/tr>
classify_pico_feature_length<\/td>0.05<\/td>Pico Feature Length<\/td><\/tr>
pitsync_joined_edge<\/td>0.75<\/td>Dist inside big blob for chopping<\/td><\/tr>
pitsync_offset_freecut_fraction<\/td>0.25<\/td>Fraction of cut for free cuts<\/td><\/tr>
textord_tabvector_vertical_gap_fraction<\/td>0.5<\/td>max fraction of mean blob width allowed for vertical gaps in vertical text<\/td><\/tr>
textord_tabvector_vertical_box_ratio<\/td>0.5<\/td>Fraction of box matches required to declare a line vertical<\/td><\/tr>
textord_projection_scale<\/td>0.2<\/td>Ding rate for mid-cuts<\/td><\/tr>
textord_balance_factor<\/td>1<\/td>Ding rate for unbalanced char cells<\/td><\/tr>
textord_wordstats_smooth_factor<\/td>0.05<\/td>Smoothing gap stats<\/td><\/tr>
textord_width_smooth_factor<\/td>0.1<\/td>Smoothing width stats<\/td><\/tr>
textord_words_width_ile<\/td>0.4<\/td>Ile of blob widths for space est<\/td><\/tr>
textord_words_maxspace<\/td>4<\/td>Multiple of xheight<\/td><\/tr>
textord_words_default_maxspace<\/td>3.5<\/td>Max believable third space<\/td><\/tr>
textord_words_default_minspace<\/td>0.6<\/td>Fraction of xheight<\/td><\/tr>
textord_words_min_minspace<\/td>0.3<\/td>Fraction of xheight<\/td><\/tr>
textord_words_default_nonspace<\/td>0.2<\/td>Fraction of xheight<\/td><\/tr>
textord_words_initial_lower<\/td>0.25<\/td>Max initial cluster size<\/td><\/tr>
textord_words_initial_upper<\/td>0.15<\/td>Min initial cluster spacing<\/td><\/tr>
textord_words_minlarge<\/td>0.75<\/td>Fraction of valid gaps needed<\/td><\/tr>
textord_words_pitchsd_threshold<\/td>0.04<\/td>Pitch sync threshold<\/td><\/tr>
textord_words_def_fixed<\/td>0.016<\/td>Threshold for definite fixed<\/td><\/tr>
textord_words_def_prop<\/td>0.09<\/td>Threshold for definite prop<\/td><\/tr>
textord_pitch_rowsimilarity<\/td>0.08<\/td>Fraction of xheight for sameness<\/td><\/tr>
words_initial_lower<\/td>0.5<\/td>Max initial cluster size<\/td><\/tr>
words_initial_upper<\/td>0.15<\/td>Min initial cluster spacing<\/td><\/tr>
words_default_prop_nonspace<\/td>0.25<\/td>Fraction of xheight<\/td><\/tr>
words_default_fixed_space<\/td>0.75<\/td>Fraction of xheight<\/td><\/tr>
words_default_fixed_limit<\/td>0.6<\/td>Allowed size variance<\/td><\/tr>
textord_words_definite_spread<\/td>0.3<\/td>Non-fuzzy spacing region<\/td><\/tr>
textord_spacesize_ratiofp<\/td>2.8<\/td>Min ratio space\/nonspace<\/td><\/tr>
textord_spacesize_ratioprop<\/td>2<\/td>Min ratio space\/nonspace<\/td><\/tr>
textord_fpiqr_ratio<\/td>1.5<\/td>Pitch IQR\/Gap IQR threshold<\/td><\/tr>
textord_max_pitch_iqr<\/td>0.2<\/td>Xh fraction noise in pitch<\/td><\/tr>
textord_fp_min_width<\/td>0.5<\/td>Min width of decent blobs<\/td><\/tr>
textord_underline_offset<\/td>0.1<\/td>Fraction of x to ignore<\/td><\/tr>
ambigs_debug_level<\/td>0<\/td>Debug level for unichar ambiguities<\/td><\/tr>
tessedit_single_match<\/td>0<\/td>Top choice only from CP<\/td><\/tr>
classify_debug_level<\/td>0<\/td>Classify debug level<\/td><\/tr>
classify_norm_method<\/td>1<\/td>Normalization Method …<\/td><\/tr>
matcher_debug_level<\/td>0<\/td>Matcher Debug Level<\/td><\/tr>
matcher_debug_flags<\/td>0<\/td>Matcher Debug Flags<\/td><\/tr>
classify_learning_debug_level<\/td>0<\/td>Learning Debug Level:<\/td><\/tr>
matcher_permanent_classes_min<\/td>1<\/td>Min # of permanent classes<\/td><\/tr>
matcher_min_examples_for_prototyping<\/td>3<\/td>Reliable Config Threshold<\/td><\/tr>
matcher_sufficient_examples_for_prototyping<\/td>5<\/td>Enable adaption even if the ambiguities have not been seen<\/td><\/tr>
classify_adapt_proto_threshold<\/td>230<\/td>Threshold for good protos during adaptive 0-255<\/td><\/tr>
classify_adapt_feature_threshold<\/td>230<\/td>Threshold for good features during adaptive 0-255<\/td><\/tr>
classify_class_pruner_threshold<\/td>229<\/td>Class Pruner Threshold 0-255<\/td><\/tr>
classify_class_pruner_multiplier<\/td>15<\/td>Class Pruner Multiplier 0-255:<\/td><\/tr>
classify_cp_cutoff_strength<\/td>7<\/td>Class Pruner CutoffStrength:<\/td><\/tr>
classify_integer_matcher_multiplier<\/td>10<\/td>Integer Matcher Multiplier 0-255:<\/td><\/tr>
il1_adaption_test<\/td>0<\/td>Dont adapt to i\/I at beginning of word<\/td><\/tr>
dawg_debug_level<\/td>0<\/td>Set to 1 for general debug info, to 2 for more details, to 3 to see all the debug messages<\/td><\/tr>
hyphen_debug_level<\/td>0<\/td>Debug level for hyphenated words.<\/td><\/tr>
max_viterbi_list_size<\/td>10<\/td>Maximum size of viterbi list.<\/td><\/tr>
stopper_smallword_size<\/td>2<\/td>Size of dict word to be treated as non-dict word<\/td><\/tr>
stopper_debug_level<\/td>0<\/td>Stopper debug level<\/td><\/tr>
tessedit_truncate_wordchoice_log<\/td>10<\/td>Max words to keep in list<\/td><\/tr>
fragments_debug<\/td>0<\/td>Debug character fragments<\/td><\/tr>
max_permuter_attempts<\/td>10000<\/td>Maximum number of different character choices to consider during permutation. This limit is especially useful when user patterns are specified, since overly generic patterns can result in dawg search exploring an overly large number of options.<\/td><\/tr>
repair_unchopped_blobs<\/td>1<\/td>Fix blobs that aren’t chopped<\/td><\/tr>
chop_debug<\/td>0<\/td>Chop debug<\/td><\/tr>
chop_split_length<\/td>10000<\/td>Split Length<\/td><\/tr>
chop_same_distance<\/td>2<\/td>Same distance<\/td><\/tr>
chop_min_outline_points<\/td>6<\/td>Min Number of Points on Outline<\/td><\/tr>
chop_seam_pile_size<\/td>150<\/td>Max number of seams in seam_pile<\/td><\/tr>
chop_inside_angle<\/td>-50<\/td>Min Inside Angle Bend<\/td><\/tr>
chop_min_outline_area<\/td>2000<\/td>Min Outline Area<\/td><\/tr>
chop_centered_maxwidth<\/td>90<\/td>Width of (smaller) chopped blobs above which we don’t care that a chop is not near the center.<\/td><\/tr>
chop_x_y_weight<\/td>3<\/td>X \/ Y length weight<\/td><\/tr>
segment_adjust_debug<\/td>0<\/td>Segmentation adjustment debug<\/td><\/tr>
wordrec_debug_level<\/td>0<\/td>Debug level for wordrec<\/td><\/tr>
wordrec_max_join_chunks<\/td>4<\/td>Max number of broken pieces to associate<\/td><\/tr>
segsearch_debug_level<\/td>0<\/td>SegSearch debug level<\/td><\/tr>
segsearch_max_pain_points<\/td>2000<\/td>Maximum number of pain points stored in the queue<\/td><\/tr>
segsearch_max_futile_classifications<\/td>20<\/td>Maximum number of pain point classifications per chunk that did not result in finding a better word choice.<\/td><\/tr>
language_model_debug_level<\/td>0<\/td>Language model debug level<\/td><\/tr>
language_model_ngram_order<\/td>8<\/td>Maximum order of the character ngram model<\/td><\/tr>
language_model_viterbi_list_max_num_prunable<\/td>10<\/td>Maximum number of prunable (those for which PrunablePath() is true) entries in each viterbi list recorded in BLOB_CHOICEs<\/td><\/tr>
language_model_viterbi_list_max_size<\/td>500<\/td>Maximum size of viterbi lists recorded in BLOB_CHOICEs<\/td><\/tr>
language_model_min_compound_length<\/td>3<\/td>Minimum length of compound words<\/td><\/tr>
wordrec_display_segmentations<\/td>0<\/td>Display Segmentations<\/td><\/tr>
tessedit_pageseg_mode<\/td>6<\/td>Page seg mode: 0=osd only, 1=auto+osd, 2=auto, 3=col, 4=block, 5=line, 6=word, 7=char (Values from PageSegMode enum in publictypes.h)<\/td><\/tr>
tessedit_ocr_engine_mode<\/td>0<\/td>Which OCR engine(s) to run (Tesseract, Cube, both). Defaults to loading and running only Tesseract (no Cube,no combiner). Values from OcrEngineMode enum in tesseractclass.h)<\/td><\/tr>
pageseg_devanagari_split_strategy<\/td>0<\/td>Whether to use the top-line splitting process for Devanagari documents while performing page-segmentation.<\/td><\/tr>
ocr_devanagari_split_strategy<\/td>0<\/td>Whether to use the top-line splitting process for Devanagari documents while performing ocr.<\/td><\/tr>
bidi_debug<\/td>0<\/td>Debug level for BiDi<\/td><\/tr>
applybox_debug<\/td>1<\/td>Debug level<\/td><\/tr>
applybox_page<\/td>0<\/td>Page number to apply boxes from<\/td><\/tr>
tessedit_bigram_debug<\/td>0<\/td>Amount of debug output for bigram correction.<\/td><\/tr>
debug_noise_removal<\/td>0<\/td>Debug reassignment of small outlines<\/td><\/tr>
noise_maxperblob<\/td>8<\/td>Max diacritics to apply to a blob<\/td><\/tr>
noise_maxperword<\/td>16<\/td>Max diacritics to apply to a word<\/td><\/tr>
debug_x_ht_level<\/td>0<\/td>Reestimate debug<\/td><\/tr>
quality_min_initial_alphas_reqd<\/td>2<\/td>alphas in a good word<\/td><\/tr>
tessedit_tess_adaption_mode<\/td>39<\/td>Adaptation decision algorithm for tess<\/td><\/tr>
tessedit_test_adaption_mode<\/td>3<\/td>Adaptation decision algorithm for tess<\/td><\/tr>
paragraph_debug_level<\/td>0<\/td>Print paragraph debug info.<\/td><\/tr>
cube_debug_level<\/td>0<\/td>Print cube debug info.<\/td><\/tr>
tessedit_preserve_min_wd_len<\/td>2<\/td>Only preserve wds longer than this<\/td><\/tr>
crunch_rating_max<\/td>10<\/td>For adj length in rating per ch<\/td><\/tr>
crunch_pot_indicators<\/td>1<\/td>How many potential indicators needed<\/td><\/tr>
crunch_leave_lc_strings<\/td>4<\/td>Dont crunch words with long lower case strings<\/td><\/tr>
crunch_leave_uc_strings<\/td>4<\/td>Dont crunch words with long upper case strings<\/td><\/tr>
crunch_long_repetitions<\/td>3<\/td>Crunch words with long repetitions<\/td><\/tr>
crunch_debug<\/td>0<\/td>As it says<\/td><\/tr>
fixsp_non_noise_limit<\/td>1<\/td>How many non-noise blbs either side?<\/td><\/tr>
fixsp_done_mode<\/td>1<\/td>What constitutes done for spacing<\/td><\/tr>
debug_fix_space_level<\/td>0<\/td>Contextual fixspace debug<\/td><\/tr>
x_ht_acceptance_tolerance<\/td>8<\/td>Max allowed deviation of blob top outside of font data<\/td><\/tr>
x_ht_min_change<\/td>8<\/td>Min change in xht before actually trying it<\/td><\/tr>
superscript_debug<\/td>0<\/td>Debug level for sub and superscript fixer<\/td><\/tr>
suspect_level<\/td>99<\/td>Suspect marker level<\/td><\/tr>
suspect_space_level<\/td>100<\/td>Min suspect level for rejecting spaces<\/td><\/tr>
suspect_short_words<\/td>2<\/td>Dont Suspect dict wds longer than this<\/td><\/tr>
tessedit_reject_mode<\/td>0<\/td>Rejection algorithm<\/td><\/tr>
tessedit_image_border<\/td>2<\/td>Rej blbs near image edge limit<\/td><\/tr>
min_sane_x_ht_pixels<\/td>8<\/td>Reject any x-ht lt or eq than this<\/td><\/tr>
tessedit_page_number<\/td>-1<\/td>-1 -> All pages, else specific page to process<\/td><\/tr>
tessdata_manager_debug_level<\/td>0<\/td>Debug level for TessdataManager functions.<\/td><\/tr>
tessedit_parallelize<\/td>0<\/td>Run in parallel where possible<\/td><\/tr>
tessedit_ok_mode<\/td>5<\/td>Acceptance decision algorithm<\/td><\/tr>
segment_debug<\/td>0<\/td>Debug the whole segmentation process<\/td><\/tr>
language_model_fixed_length_choices_depth<\/td>3<\/td>Depth of blob choice lists to explore when fixed length dawgs are on<\/td><\/tr>
tosp_debug_level<\/td>0<\/td>Debug data<\/td><\/tr>
tosp_enough_space_samples_for_median<\/td>3<\/td>or should we use mean<\/td><\/tr>
tosp_redo_kern_limit<\/td>10<\/td>No.samples reqd to reestimate for row<\/td><\/tr>
tosp_few_samples<\/td>40<\/td>No.gaps reqd with 1 large gap to treat as a table<\/td><\/tr>
tosp_short_row<\/td>20<\/td>No.gaps reqd with few cert spaces to use certs<\/td><\/tr>
tosp_sanity_method<\/td>1<\/td>How to avoid being silly<\/td><\/tr>
textord_max_noise_size<\/td>7<\/td>Pixel size of noise<\/td><\/tr>
textord_baseline_debug<\/td>0<\/td>Baseline debug level<\/td><\/tr>
textord_noise_sizefraction<\/td>10<\/td>Fraction of size for maxima<\/td><\/tr>
textord_noise_translimit<\/td>16<\/td>Transitions for normal blob<\/td><\/tr>
textord_noise_sncount<\/td>1<\/td>super norm blobs to save row<\/td><\/tr>
use_definite_ambigs_for_classifier<\/td>0<\/td>Use definite ambiguities when running character classifier<\/td><\/tr>
use_ambigs_for_adaption<\/td>0<\/td>Use ambigs for deciding whether to adapt to a character<\/td><\/tr>
allow_blob_division<\/td>1<\/td>Use divisible blobs chopping<\/td><\/tr>
prioritize_division<\/td>0<\/td>Prioritize blob division over chopping<\/td><\/tr>
classify_enable_learning<\/td>1<\/td>Enable adaptive classifier<\/td><\/tr>
tess_cn_matching<\/td>0<\/td>Character Normalized Matching<\/td><\/tr>
tess_bn_matching<\/td>0<\/td>Baseline Normalized Matching<\/td><\/tr>
classify_enable_adaptive_matcher<\/td>1<\/td>Enable adaptive classifier<\/td><\/tr>
classify_use_pre_adapted_templates<\/td>0<\/td>Use pre-adapted classifier templates<\/td><\/tr>
classify_save_adapted_templates<\/td>0<\/td>Save adapted templates to a file<\/td><\/tr>
classify_enable_adaptive_debugger<\/td>0<\/td>Enable match debugger<\/td><\/tr>
classify_nonlinear_norm<\/td>0<\/td>Non-linear stroke-density normalization<\/td><\/tr>
disable_character_fragments<\/td>1<\/td>Do not include character fragments in the results of the classifier<\/td><\/tr>
classify_debug_character_fragments<\/td>0<\/td>Bring up graphical debugging windows for fragments training<\/td><\/tr>
matcher_debug_separate_windows<\/td>0<\/td>Use two different windows for debugging the matching: One for the protos and one for the features.<\/td><\/tr>
classify_bln_numeric_mode<\/td>0<\/td>Assume the input is numbers [0-9].<\/td><\/tr>
load_system_dawg<\/td>1<\/td>Load system word dawg.<\/td><\/tr>
load_freq_dawg<\/td>1<\/td>Load frequent word dawg.<\/td><\/tr>
load_unambig_dawg<\/td>1<\/td>Load unambiguous word dawg.<\/td><\/tr>
load_punc_dawg<\/td>1<\/td>Load dawg with punctuation patterns.<\/td><\/tr>
load_number_dawg<\/td>1<\/td>Load dawg with number patterns.<\/td><\/tr>
load_bigram_dawg<\/td>1<\/td>Load dawg with special word bigrams.<\/td><\/tr>
use_only_first_uft8_step<\/td>0<\/td>Use only the first UTF8 step of the given string when computing log probabilities.<\/td><\/tr>
stopper_no_acceptable_choices<\/td>0<\/td>Make AcceptableChoice() always return false. Useful when there is a need to explore all segmentations<\/td><\/tr>
save_raw_choices<\/td>0<\/td>Deprecated - backward compatibility only<\/td><\/tr>
segment_nonalphabetic_script<\/td>0<\/td>Don’t use any alphabetic-specific tricks. Set to true in the traineddata config file for scripts that are cursive or inherently fixed-pitch<\/td><\/tr>
save_doc_words<\/td>0<\/td>Save Document Words<\/td><\/tr>
merge_fragments_in_matrix<\/td>1<\/td>Merge the fragments in the ratings matrix and delete them after merging<\/td><\/tr>
wordrec_no_block<\/td>0<\/td>Don’t output block information<\/td><\/tr>
wordrec_enable_assoc<\/td>1<\/td>Associator Enable<\/td><\/tr>
force_word_assoc<\/td>0<\/td>force associator to run regardless of what enable_assoc is. This is used for CJK where component grouping is necessary.<\/td><\/tr>
fragments_guide_chopper<\/td>0<\/td>Use information from fragments to guide chopping process<\/td><\/tr>
chop_enable<\/td>1<\/td>Chop enable<\/td><\/tr>
chop_vertical_creep<\/td>0<\/td>Vertical creep<\/td><\/tr>
chop_new_seam_pile<\/td>1<\/td>Use new seam_pile<\/td><\/tr>
assume_fixed_pitch_char_segment<\/td>0<\/td>include fixed-pitch heuristics in char segmentation<\/td><\/tr>
wordrec_skip_no_truth_words<\/td>0<\/td>Only run OCR for words that had truth recorded in BlamerBundle<\/td><\/tr>
wordrec_debug_blamer<\/td>0<\/td>Print blamer debug messages<\/td><\/tr>
wordrec_run_blamer<\/td>0<\/td>Try to set the blame for errors<\/td><\/tr>
save_alt_choices<\/td>1<\/td>Save alternative paths found during chopping and segmentation search<\/td><\/tr>
language_model_ngram_on<\/td>0<\/td>Turn on\/off the use of character ngram model<\/td><\/tr>
language_model_ngram_use_only_first_uft8_step<\/td>0<\/td>Use only the first UTF8 step of the given string when computing log probabilities.<\/td><\/tr>
language_model_ngram_space_delimited_language<\/td>1<\/td>Words are delimited by space<\/td><\/tr>
language_model_use_sigmoidal_certainty<\/td>0<\/td>Use sigmoidal score for certainty<\/td><\/tr>
tessedit_resegment_from_boxes<\/td>0<\/td>Take segmentation and labeling from box file<\/td><\/tr>
tessedit_resegment_from_line_boxes<\/td>0<\/td>Conversion of word\/line box file to char box file<\/td><\/tr>
tessedit_train_from_boxes<\/td>0<\/td>Generate training data from boxed chars<\/td><\/tr>
tessedit_make_boxes_from_boxes<\/td>0<\/td>Generate more boxes from boxed chars<\/td><\/tr>
tessedit_dump_pageseg_images<\/td>0<\/td>Dump intermediate images made during page segmentation<\/td><\/tr>
tessedit_ambigs_training<\/td>0<\/td>Perform training for ambiguities<\/td><\/tr>
tessedit_adaption_debug<\/td>0<\/td>Generate and print debug information for adaption<\/td><\/tr>
applybox_learn_chars_and_char_frags_mode<\/td>0<\/td>Learn both character fragments (as is done in the special low exposure mode) as well as unfragmented characters.<\/td><\/tr>
applybox_learn_ngrams_mode<\/td>0<\/td>Each bounding box is assumed to contain ngrams. Only learn the ngrams whose outlines overlap horizontally.<\/td><\/tr>
tessedit_display_outwords<\/td>0<\/td>Draw output words<\/td><\/tr>
tessedit_dump_choices<\/td>0<\/td>Dump char choices<\/td><\/tr>
tessedit_timing_debug<\/td>0<\/td>Print timing stats<\/td><\/tr>
tessedit_fix_fuzzy_spaces<\/td>1<\/td>Try to improve fuzzy spaces<\/td><\/tr>
tessedit_unrej_any_wd<\/td>0<\/td>Dont bother with word plausibility<\/td><\/tr>
tessedit_fix_hyphens<\/td>1<\/td>Crunch double hyphens?<\/td><\/tr>
tessedit_redo_xheight<\/td>1<\/td>Check\/Correct x-height<\/td><\/tr>
tessedit_enable_doc_dict<\/td>1<\/td>Add words to the document dictionary<\/td><\/tr>
tessedit_debug_fonts<\/td>0<\/td>Output font info per char<\/td><\/tr>
tessedit_debug_block_rejection<\/td>0<\/td>Block and Row stats<\/td><\/tr>
tessedit_enable_bigram_correction<\/td>1<\/td>Enable correction based on the word bigram dictionary.<\/td><\/tr>
tessedit_enable_dict_correction<\/td>0<\/td>Enable single word correction based on the dictionary.<\/td><\/tr>
enable_noise_removal<\/td>1<\/td>Remove and conditionally reassign small outlines when they confuse layout analysis, determining diacritics vs noise<\/td><\/tr>
debug_acceptable_wds<\/td>0<\/td>Dump word pass\/fail chk<\/td><\/tr>
tessedit_minimal_rej_pass1<\/td>0<\/td>Do minimal rejection on pass 1 output<\/td><\/tr>
tessedit_test_adaption<\/td>0<\/td>Test adaption criteria<\/td><\/tr>
tessedit_matcher_log<\/td>0<\/td>Log matcher activity<\/td><\/tr>
test_pt<\/td>0<\/td>Test for point<\/td><\/tr>
paragraph_text_based<\/td>1<\/td>Run paragraph detection on the post-text-recognition (more accurate)<\/td><\/tr>
docqual_excuse_outline_errs<\/td>0<\/td>Allow outline errs in unrejection?<\/td><\/tr>
tessedit_good_quality_unrej<\/td>1<\/td>Reduce rejection on good docs<\/td><\/tr>
tessedit_use_reject_spaces<\/td>1<\/td>Reject spaces?<\/td><\/tr>
tessedit_preserve_blk_rej_perfect_wds<\/td>1<\/td>Only rej partially rejected words in block rejection<\/td><\/tr>
tessedit_preserve_row_rej_perfect_wds<\/td>1<\/td>Only rej partially rejected words in row rejection<\/td><\/tr>
tessedit_dont_blkrej_good_wds<\/td>0<\/td>Use word segmentation quality metric<\/td><\/tr>
tessedit_dont_rowrej_good_wds<\/td>0<\/td>Use word segmentation quality metric<\/td><\/tr>
tessedit_row_rej_good_docs<\/td>1<\/td>Apply row rejection to good docs<\/td><\/tr>
tessedit_reject_bad_qual_wds<\/td>1<\/td>Reject all bad quality wds<\/td><\/tr>
tessedit_debug_doc_rejection<\/td>0<\/td>Page stats<\/td><\/tr>
tessedit_debug_quality_metrics<\/td>0<\/td>Output data to debug file<\/td><\/tr>
bland_unrej<\/td>0<\/td>unrej potential with no checks<\/td><\/tr>
unlv_tilde_crunching<\/td>1<\/td>Mark v.bad words for tilde crunch<\/td><\/tr>
hocr_font_info<\/td>0<\/td>Add font info to hocr output<\/td><\/tr>
crunch_early_merge_tess_fails<\/td>1<\/td>Before word crunch?<\/td><\/tr>
crunch_early_convert_bad_unlv_chs<\/td>0<\/td>Take out ~^ early?<\/td><\/tr>
crunch_terrible_garbage<\/td>1<\/td>As it says<\/td><\/tr>
crunch_pot_garbage<\/td>1<\/td>POTENTIAL crunch garbage<\/td><\/tr>
crunch_leave_ok_strings<\/td>1<\/td>Dont touch sensible strings<\/td><\/tr>
crunch_accept_ok<\/td>1<\/td>Use acceptability in okstring<\/td><\/tr>
crunch_leave_accept_strings<\/td>0<\/td>Dont pot crunch sensible strings<\/td><\/tr>
crunch_include_numerals<\/td>0<\/td>Fiddle alpha figures<\/td><\/tr>
tessedit_prefer_joined_punct<\/td>0<\/td>Reward punctuation joins<\/td><\/tr>
tessedit_write_block_separators<\/td>0<\/td>Write block separators in output<\/td><\/tr>
tessedit_write_rep_codes<\/td>0<\/td>Write repetition char code<\/td><\/tr>
tessedit_write_unlv<\/td>0<\/td>Write .unlv output file<\/td><\/tr>
tessedit_create_txt<\/td>1<\/td>Write .txt output file<\/td><\/tr>
tessedit_create_hocr<\/td>0<\/td>Write .html hOCR output file<\/td><\/tr>
tessedit_create_pdf<\/td>0<\/td>Write .pdf output file<\/td><\/tr>
suspect_constrain_1Il<\/td>0<\/td>UNLV keep 1Il chars rejected<\/td><\/tr>
tessedit_minimal_rejection<\/td>0<\/td>Only reject tess failures<\/td><\/tr>
tessedit_zero_rejection<\/td>0<\/td>Dont reject ANYTHING<\/td><\/tr>
tessedit_word_for_word<\/td>0<\/td>Make output have exactly one word per WERD<\/td><\/tr>
tessedit_zero_kelvin_rejection<\/td>0<\/td>Dont reject ANYTHING AT ALL<\/td><\/tr>
tessedit_consistent_reps<\/td>1<\/td>Force all rep chars the same<\/td><\/tr>
tessedit_rejection_debug<\/td>0<\/td>Adaption debug<\/td><\/tr>
tessedit_flip_0O<\/td>1<\/td>Contextual 0O O0 flips<\/td><\/tr>
rej_trust_doc_dawg<\/td>0<\/td>Use DOC dawg in 11l conf. detector<\/td><\/tr>
rej_1Il_use_dict_word<\/td>0<\/td>Use dictword test<\/td><\/tr>
rej_1Il_trust_permuter_type<\/td>1<\/td>Dont double check<\/td><\/tr>
rej_use_tess_accepted<\/td>1<\/td>Individual rejection control<\/td><\/tr>
rej_use_tess_blanks<\/td>1<\/td>Individual rejection control<\/td><\/tr>
rej_use_good_perm<\/td>1<\/td>Individual rejection control<\/td><\/tr>
rej_use_sensible_wd<\/td>0<\/td>Extend permuter check<\/td><\/tr>
rej_alphas_in_number_perm<\/td>0<\/td>Extend permuter check<\/td><\/tr>
tessedit_create_boxfile<\/td>0<\/td>Output text with boxes<\/td><\/tr>
tessedit_write_images<\/td>0<\/td>Capture the image from the IPE<\/td><\/tr>
interactive_display_mode<\/td>0<\/td>Run interactively?<\/td><\/tr>
tessedit_override_permuter<\/td>1<\/td>According to dict_word<\/td><\/tr>
tessedit_use_primary_params_model<\/td>0<\/td>In multilingual mode use params model of the primary language<\/td><\/tr>
textord_tabfind_show_vlines<\/td>0<\/td>Debug line finding<\/td><\/tr>
textord_use_cjk_fp_model<\/td>0<\/td>Use CJK fixed pitch model<\/td><\/tr>
poly_allow_detailed_fx<\/td>0<\/td>Allow feature extractors to see the original outline<\/td><\/tr>
tessedit_init_config_only<\/td>0<\/td>Only initialize with the config file. Useful if the instance is not going to be used for OCR but say only for layout analysis.<\/td><\/tr>
textord_equation_detect<\/td>0<\/td>Turn on equation detector<\/td><\/tr>
textord_tabfind_vertical_text<\/td>1<\/td>Enable vertical detection<\/td><\/tr>
textord_tabfind_force_vertical_text<\/td>0<\/td>Force using vertical text page mode<\/td><\/tr>
preserve_interword_spaces<\/td>0<\/td>Preserve multiple interword spaces<\/td><\/tr>
include_page_breaks<\/td>0<\/td>Include page separator string in output text after each image\/page.<\/td><\/tr>
textord_tabfind_vertical_horizontal_mix<\/td>1<\/td>find horizontal lines such as headers in vertical page mode<\/td><\/tr>
load_fixed_length_dawgs<\/td>1<\/td>Load fixed length dawgs (e.g. for non-space delimited languages)<\/td><\/tr>
permute_debug<\/td>0<\/td>Debug char permutation process<\/td><\/tr>
permute_script_word<\/td>0<\/td>Turn on word script consistency permuter<\/td><\/tr>
segment_segcost_rating<\/td>0<\/td>incorporate segmentation cost in word rating?<\/td><\/tr>
permute_fixed_length_dawg<\/td>0<\/td>Turn on fixed-length phrasebook search permuter<\/td><\/tr>
permute_chartype_word<\/td>0<\/td>Turn on character type (property) consistency permuter<\/td><\/tr>
ngram_permuter_activated<\/td>0<\/td>Activate character-level n-gram-based permuter<\/td><\/tr>
permute_only_top<\/td>0<\/td>Run only the top choice permuter<\/td><\/tr>
use_new_state_cost<\/td>0<\/td>use new state cost heuristics for segmentation state evaluation<\/td><\/tr>
enable_new_segsearch<\/td>0<\/td>Enable new segmentation search path.<\/td><\/tr>
textord_single_height_mode<\/td>0<\/td>Script has no xheight, so use a single mode<\/td><\/tr>
tosp_old_to_method<\/td>0<\/td>Space stats use prechopping?<\/td><\/tr>
tosp_old_to_constrain_sp_kn<\/td>0<\/td>Constrain relative values of inter and intra-word gaps for old_to_method.<\/td><\/tr>
tosp_only_use_prop_rows<\/td>1<\/td>Block stats to use fixed pitch rows?<\/td><\/tr>
tosp_force_wordbreak_on_punct<\/td>0<\/td>Force word breaks on punct to break long lines in non-space delimited langs<\/td><\/tr>
tosp_use_pre_chopping<\/td>0<\/td>Space stats use prechopping?<\/td><\/tr>
tosp_old_to_bug_fix<\/td>0<\/td>Fix suspected bug in old code<\/td><\/tr>
tosp_block_use_cert_spaces<\/td>1<\/td>Only stat OBVIOUS spaces<\/td><\/tr>
tosp_row_use_cert_spaces<\/td>1<\/td>Only stat OBVIOUS spaces<\/td><\/tr>
tosp_narrow_blobs_not_cert<\/td>1<\/td>Only stat OBVIOUS spaces<\/td><\/tr>
tosp_row_use_cert_spaces1<\/td>1<\/td>Only stat OBVIOUS spaces<\/td><\/tr>
tosp_recovery_isolated_row_stats<\/td>1<\/td>Use row alone when inadequate cert spaces<\/td><\/tr>
tosp_only_small_gaps_for_kern<\/td>0<\/td>Better guess<\/td><\/tr>
tosp_all_flips_fuzzy<\/td>0<\/td>Pass ANY flip to context?<\/td><\/tr>
tosp_fuzzy_limit_all<\/td>1<\/td>Dont restrict kn->sp fuzzy limit to tables<\/td><\/tr>
tosp_stats_use_xht_gaps<\/td>1<\/td>Use within xht gap for wd breaks<\/td><\/tr>
tosp_use_xht_gaps<\/td>1<\/td>Use within xht gap for wd breaks<\/td><\/tr>
tosp_only_use_xht_gaps<\/td>0<\/td>Only use within xht gap for wd breaks<\/td><\/tr>
tosp_rule_9_test_punct<\/td>0<\/td>Dont chng kn to space next to punct<\/td><\/tr>
tosp_flip_fuzz_kn_to_sp<\/td>1<\/td>Default flip<\/td><\/tr>
tosp_flip_fuzz_sp_to_kn<\/td>1<\/td>Default flip<\/td><\/tr>
tosp_improve_thresh<\/td>0<\/td>Enable improvement heuristic<\/td><\/tr>
textord_no_rejects<\/td>0<\/td>Don’t remove noise blobs<\/td><\/tr>
textord_show_blobs<\/td>0<\/td>Display unsorted blobs<\/td><\/tr>
textord_show_boxes<\/td>0<\/td>Display unsorted blobs<\/td><\/tr>
textord_noise_rejwords<\/td>1<\/td>Reject noise-like words<\/td><\/tr>
textord_noise_rejrows<\/td>1<\/td>Reject noise-like rows<\/td><\/tr>
textord_noise_debug<\/td>0<\/td>Debug row garbage detector<\/td><\/tr>
m_data_sub_dir<\/td>tessdata\/<\/td>Directory for data files<\/td><\/tr>
tessedit_module_name<\/td>libtesseract304.dll<\/td>Module colocated with tessdata dir<\/td><\/tr>
classify_learn_debug_str<\/td><\/td>Class str to debug learning<\/td><\/tr>
user_words_file<\/td><\/td>A filename of user-provided words.<\/td><\/tr>
user_words_suffix<\/td><\/td>A suffix of user-provided words located in tessdata.<\/td><\/tr>
user_patterns_file<\/td><\/td>A filename of user-provided patterns.<\/td><\/tr>
user_patterns_suffix<\/td><\/td>A suffix of user-provided patterns located in tessdata.<\/td><\/tr>
output_ambig_words_file<\/td><\/td>Output file for ambiguities found in the dictionary<\/td><\/tr>
word_to_debug<\/td><\/td>Word for which stopper debug information should be printed to stdout<\/td><\/tr>
word_to_debug_lengths<\/td><\/td>Lengths of unichars in word_to_debug<\/td><\/tr>
tessedit_char_blacklist<\/td><\/td>Blacklist of chars not to recognize<\/td><\/tr>
tessedit_char_whitelist<\/td><\/td>Whitelist of chars to recognize<\/td><\/tr>
tessedit_char_unblacklist<\/td><\/td>List of chars to override tessedit_char_blacklist<\/td><\/tr>
tessedit_write_params_to_file<\/td><\/td>Write all parameters to the given file.<\/td><\/tr>
applybox_exposure_pattern<\/td>.exp<\/td>Exposure value follows this pattern in the image filename. The name of the image files are expected to be in the form [lang].[fontname].exp[num].tif<\/td><\/tr>
chs_leading_punct<\/td>('`\"<\/td>Leading punctuation<\/td><\/tr>
chs_trailing_punct1<\/td>).,;:?!<\/td>1st Trailing punctuation<\/td><\/tr>
chs_trailing_punct2<\/td>)'`\"<\/td>2nd Trailing punctuation<\/td><\/tr>
outlines_odd<\/td>%|<\/td>Non standard number of outlines<\/td><\/tr>
outlines_2<\/td>ij!?%\":;<\/td>Non standard number of outlines<\/td><\/tr>
numeric_punctuation<\/td>.,<\/td>Punct. chs expected WITHIN numbers<\/td><\/tr>
unrecognised_char<\/td>|<\/td>Output char for unidentified blobs<\/td><\/tr>
ok_repeated_ch_non_alphanum_wds<\/td>-?*=<\/td>Allow NN to unrej<\/td><\/tr>
conflict_set_I_l_1<\/td>Il1[]<\/td>Il1 conflict set<\/td><\/tr>
file_type<\/td>.tif<\/td>Filename extension<\/td><\/tr>
tessedit_load_sublangs<\/td><\/td>List of languages to load with this one<\/td><\/tr>
page_separator<\/td><\/td>Page separator (default is form feed control character)<\/td><\/tr>
classify_char_norm_range<\/td>0.2<\/td>Character Normalization Range …<\/td><\/tr>
classify_min_norm_scale_x<\/td>0<\/td>Min char x-norm scale …<\/td><\/tr>
classify_max_norm_scale_x<\/td>0.325<\/td>Max char x-norm scale …<\/td><\/tr>
classify_min_norm_scale_y<\/td>0<\/td>Min char y-norm scale …<\/td><\/tr>
classify_max_norm_scale_y<\/td>0.325<\/td>Max char y-norm scale …<\/td><\/tr>
classify_max_rating_ratio<\/td>1.5<\/td>Veto ratio between classifier ratings<\/td><\/tr>
classify_max_certainty_margin<\/td>5.5<\/td>Veto difference between classifier certainties<\/td><\/tr>
matcher_good_threshold<\/td>0.125<\/td>Good Match (0-1)<\/td><\/tr>
matcher_reliable_adaptive_result<\/td>0<\/td>Great Match (0-1)<\/td><\/tr>
matcher_perfect_threshold<\/td>0.02<\/td>Perfect Match (0-1)<\/td><\/tr>
matcher_bad_match_pad<\/td>0.15<\/td>Bad Match Pad (0-1)<\/td><\/tr>
matcher_rating_margin<\/td>0.1<\/td>New template margin (0-1)<\/td><\/tr>
matcher_avg_noise_size<\/td>12<\/td>Avg. noise blob length<\/td><\/tr>
matcher_clustering_max_angle_delta<\/td>0.015<\/td>Maximum angle delta for prototype clustering<\/td><\/tr>
classify_misfit_junk_penalty<\/td>0<\/td>Penalty to apply when a non-alnum is vertically out of its expected textline position<\/td><\/tr>
rating_scale<\/td>1.5<\/td>Rating scaling factor<\/td><\/tr>
certainty_scale<\/td>20<\/td>Certainty scaling factor<\/td><\/tr>
tessedit_class_miss_scale<\/td>0.00390625<\/td>Scale factor for features not used<\/td><\/tr>
classify_adapted_pruning_factor<\/td>2.5<\/td>Prune poor adapted results this much worse than best result<\/td><\/tr>
classify_adapted_pruning_threshold<\/td>-1<\/td>Threshold at which classify_adapted_pruning_factor starts<\/td><\/tr>
classify_character_fragments_garbage_certainty_threshold<\/td>-3<\/td>Exclude fragments that do not look like whole characters from training and adaption<\/td><\/tr>
speckle_large_max_size<\/td>0.3<\/td>Max large speckle size<\/td><\/tr>
speckle_rating_penalty<\/td>10<\/td>Penalty to add to worst rating for noise<\/td><\/tr>
xheight_penalty_subscripts<\/td>0.125<\/td>Score penalty (0.1 = 10%) added if there are subscripts or superscripts in a word, but it is otherwise OK.<\/td><\/tr>
xheight_penalty_inconsistent<\/td>0.25<\/td>Score penalty (0.1 = 10%) added if an xheight is inconsistent.<\/td><\/tr>
segment_penalty_dict_frequent_word<\/td>1<\/td>Score multiplier for word matches which have good case and are frequent in the given language (lower is better).<\/td><\/tr>
segment_penalty_dict_case_ok<\/td>1.1<\/td>Score multiplier for word matches that have good case (lower is better).<\/td><\/tr>
segment_penalty_dict_case_bad<\/td>1.3125<\/td>Default score multiplier for word matches, which may have case issues (lower is better).<\/td><\/tr>
segment_penalty_ngram_best_choice<\/td>1.24<\/td>Multiplier for the best choice from the ngram model.<\/td><\/tr>
segment_penalty_dict_nonword<\/td>1.25<\/td>Score multiplier for glyph fragment segmentations which do not match a dictionary word (lower is better).<\/td><\/tr>
segment_penalty_garbage<\/td>1.5<\/td>Score multiplier for poorly cased strings that are not in the dictionary and generally look like garbage (lower is better).<\/td><\/tr>
stopper_nondict_certainty_base<\/td>-2.5<\/td>Certainty threshold for non-dict words<\/td><\/tr>
stopper_phase2_certainty_rejection_offset<\/td>1<\/td>Reject certainty offset<\/td><\/tr>
stopper_certainty_per_char<\/td>-0.5<\/td>Certainty to add for each dict char above small word size.<\/td><\/tr>
stopper_allowable_character_badness<\/td>3<\/td>Max certainty variation allowed in a word (in sigma)<\/td><\/tr>
doc_dict_pending_threshold<\/td>0<\/td>Worst certainty for using pending dictionary<\/td><\/tr>
doc_dict_certainty_threshold<\/td>-2.25<\/td>Worst certainty for words that can be inserted into the document dictionary<\/td><\/tr>
wordrec_worst_state<\/td>1<\/td>Worst segmentation state<\/td><\/tr>
tessedit_certainty_threshold<\/td>-2.25<\/td>Good blob limit<\/td><\/tr>
chop_split_dist_knob<\/td>0.5<\/td>Split length adjustment<\/td><\/tr>
chop_overlap_knob<\/td>0.9<\/td>Split overlap adjustment<\/td><\/tr>
chop_center_knob<\/td>0.15<\/td>Split center adjustment<\/td><\/tr>
chop_sharpness_knob<\/td>0.06<\/td>Split sharpness adjustment<\/td><\/tr>
chop_width_change_knob<\/td>5<\/td>Width change adjustment<\/td><\/tr>
chop_ok_split<\/td>100<\/td>OK split limit<\/td><\/tr>
chop_good_split<\/td>50<\/td>Good split limit<\/td><\/tr>
segsearch_max_char_wh_ratio<\/td>2<\/td>Maximum character width-to-height ratio<\/td><\/tr>
language_model_ngram_small_prob<\/td>1e-006<\/td>To avoid overly small denominators use this as the floor of the probability returned by the ngram model.<\/td><\/tr>
language_model_ngram_nonmatch_score<\/td>-40<\/td>Average classifier score of a non-matching unichar.<\/td><\/tr>
language_model_ngram_scale_factor<\/td>0.03<\/td>Strength of the character ngram model relative to the character classifier<\/td><\/tr>
language_model_ngram_rating_factor<\/td>16<\/td>Factor to bring log-probs into the same range as ratings when multiplied by outline length<\/td><\/tr>
language_model_penalty_non_freq_dict_word<\/td>0.1<\/td>Penalty for words not in the frequent word dictionary<\/td><\/tr>
language_model_penalty_non_dict_word<\/td>0.15<\/td>Penalty for non-dictionary words<\/td><\/tr>
language_model_penalty_punc<\/td>0.2<\/td>Penalty for inconsistent punctuation<\/td><\/tr>
language_model_penalty_case<\/td>0.1<\/td>Penalty for inconsistent case<\/td><\/tr>
language_model_penalty_script<\/td>0.5<\/td>Penalty for inconsistent script<\/td><\/tr>
language_model_penalty_chartype<\/td>0.3<\/td>Penalty for inconsistent character type<\/td><\/tr>
language_model_penalty_font<\/td>0<\/td>Penalty for inconsistent font<\/td><\/tr>
language_model_penalty_spacing<\/td>0.05<\/td>Penalty for inconsistent spacing<\/td><\/tr>
language_model_penalty_increment<\/td>0.01<\/td>Penalty increment<\/td><\/tr>
noise_cert_basechar<\/td>-8<\/td>Hingepoint for base char certainty<\/td><\/tr>
noise_cert_disjoint<\/td>-1<\/td>Hingepoint for disjoint certainty<\/td><\/tr>
noise_cert_punc<\/td>-3<\/td>Threshold for new punc char certainty<\/td><\/tr>
noise_cert_factor<\/td>0.375<\/td>Scaling on certainty diff from Hingepoint<\/td><\/tr>
quality_rej_pc<\/td>0.08<\/td>good_quality_doc lte rejection limit<\/td><\/tr>
quality_blob_pc<\/td>0<\/td>good_quality_doc gte good blobs limit<\/td><\/tr>
quality_outline_pc<\/td>1<\/td>good_quality_doc lte outline error limit<\/td><\/tr>
quality_char_pc<\/td>0.95<\/td>good_quality_doc gte good char limit<\/td><\/tr>
test_pt_x<\/td>100000<\/td>xcoord<\/td><\/tr>
test_pt_y<\/td>100000<\/td>ycoord<\/td><\/tr>
tessedit_reject_doc_percent<\/td>65<\/td>%rej allowed before rej whole doc<\/td><\/tr>
tessedit_reject_block_percent<\/td>45<\/td>%rej allowed before rej whole block<\/td><\/tr>
tessedit_reject_row_percent<\/td>40<\/td>%rej allowed before rej whole row<\/td><\/tr>
tessedit_whole_wd_rej_row_percent<\/td>70<\/td>Number of row rejects in whole word rejects which prevents whole row rejection<\/td><\/tr>
tessedit_good_doc_still_rowrej_wd<\/td>1.1<\/td>rej good doc wd if more than this fraction rejected<\/td><\/tr>
quality_rowrej_pc<\/td>1.1<\/td>good_quality_doc gte good char limit<\/td><\/tr>
crunch_terrible_rating<\/td>80<\/td>crunch rating lt this<\/td><\/tr>
crunch_poor_garbage_cert<\/td>-9<\/td>crunch garbage cert lt this<\/td><\/tr>
crunch_poor_garbage_rate<\/td>60<\/td>crunch garbage rating lt this<\/td><\/tr>
crunch_pot_poor_rate<\/td>40<\/td>POTENTIAL crunch rating lt this<\/td><\/tr>
crunch_pot_poor_cert<\/td>-8<\/td>POTENTIAL crunch cert lt this<\/td><\/tr>
crunch_del_rating<\/td>60<\/td>POTENTIAL crunch rating lt this<\/td><\/tr>
crunch_del_cert<\/td>-10<\/td>POTENTIAL crunch cert lt this<\/td><\/tr>
crunch_del_min_ht<\/td>0.7<\/td>Del if word ht lt xht x this<\/td><\/tr>
crunch_del_max_ht<\/td>3<\/td>Del if word ht gt xht x this<\/td><\/tr>
crunch_del_min_width<\/td>3<\/td>Del if word width lt xht x this<\/td><\/tr>
crunch_del_high_word<\/td>1.5<\/td>Del if word gt xht x this above bl<\/td><\/tr>
crunch_del_low_word<\/td>0.5<\/td>Del if word gt xht x this below bl<\/td><\/tr>
crunch_small_outlines_size<\/td>0.6<\/td>Small if lt xht x this<\/td><\/tr>
fixsp_small_outlines_size<\/td>0.28<\/td>Small if lt xht x this<\/td><\/tr>
superscript_worse_certainty<\/td>2<\/td>How many times worse certainty does a superscript position glyph need to be for us to try classifying it as a char with a different baseline?<\/td><\/tr>
superscript_bettered_certainty<\/td>0.97<\/td>What reduction in badness is sufficient for us to choose a superscript over what we’d previously thought? For example, a value of 0.6 means we want to reduce the badness of certainty by at least 40%.<\/td><\/tr>
superscript_scaledown_ratio<\/td>0.4<\/td>A superscript scaled down more than this is unbelievably small. For example, 0.3 means we expect the font size to be no smaller than 30% of the text line font size.<\/td><\/tr>
subscript_max_y_top<\/td>0.5<\/td>Maximum top of a character measured as a multiple of x-height above the baseline for us to reconsider whether it’s a subscript.<\/td><\/tr>
superscript_min_y_bottom<\/td>0.3<\/td>Minimum bottom of a character measured as a multiple of x-height above the baseline for us to reconsider whether it’s a superscript.<\/td><\/tr>
suspect_rating_per_ch<\/td>999.9<\/td>Don’t touch bad rating limit<\/td><\/tr>
suspect_accept_rating<\/td>-999.9<\/td>Accept good rating limit<\/td><\/tr>
tessedit_lower_flip_hyphen<\/td>1.5<\/td>Aspect ratio dot\/hyphen test<\/td><\/tr>
tessedit_upper_flip_hyphen<\/td>1.8<\/td>Aspect ratio dot\/hyphen test<\/td><\/tr>
rej_whole_of_mostly_reject_word_fract<\/td>0.85<\/td>if >this fract<\/td><\/tr>
min_orientation_margin<\/td>7<\/td>Min acceptable orientation margin<\/td><\/tr>
textord_tabfind_vertical_text_ratio<\/td>0.5<\/td>Fraction of textlines deemed vertical to use vertical page mode<\/td><\/tr>
textord_tabfind_aligned_gap_fraction<\/td>0.75<\/td>Fraction of height used as a minimum gap for aligned blobs.<\/td><\/tr>
bestrate_pruning_factor<\/td>2<\/td>Multiplying factor of current best rate to prune other hypotheses<\/td><\/tr>
segment_reward_script<\/td>0.95<\/td>Score multiplier for script consistency within a word. Being a ‘reward’ factor, it should be \u2264 1. A smaller value implies a bigger reward.<\/td><\/tr>
segment_reward_chartype<\/td>0.97<\/td>Score multiplier for char type consistency within a word.<\/td><\/tr>
segment_reward_ngram_best_choice<\/td>0.99<\/td>Score multiplier for ngram permuter’s best choice (only used in the Han script path).<\/td><\/tr>
heuristic_segcost_rating_base<\/td>1.25<\/td>base factor for adding segmentation cost into word rating. It’s a multiplying factor: the larger the value above 1, the bigger the effect of segmentation cost.<\/td><\/tr>
heuristic_weight_rating<\/td>1<\/td>weight associated with char rating in combined cost of state<\/td><\/tr>
heuristic_weight_width<\/td>1000<\/td>weight associated with width evidence in combined cost of state<\/td><\/tr>
heuristic_weight_seamcut<\/td>0<\/td>weight associated with seam cut in combined cost of state<\/td><\/tr>
heuristic_max_char_wh_ratio<\/td>2<\/td>max char width-to-height ratio allowed in segmentation<\/td><\/tr>
segsearch_max_fixed_pitch_char_wh_ratio<\/td>2<\/td>Maximum character width-to-height ratio for fixed-pitch fonts<\/td><\/tr>
tosp_old_sp_kn_th_factor<\/td>2<\/td>Factor for defining space threshold in terms of space and kern sizes<\/td><\/tr>
tosp_threshold_bias1<\/td>0<\/td>how far between kern and space?<\/td><\/tr>
tosp_threshold_bias2<\/td>0<\/td>how far between kern and space?<\/td><\/tr>
tosp_narrow_fraction<\/td>0.3<\/td>Fract of xheight for narrow<\/td><\/tr>
tosp_narrow_aspect_ratio<\/td>0.48<\/td>narrow if w\/h less than this<\/td><\/tr>
tosp_wide_fraction<\/td>0.52<\/td>Fract of xheight for wide<\/td><\/tr>
tosp_wide_aspect_ratio<\/td>0<\/td>wide if w\/h less than this<\/td><\/tr>
tosp_fuzzy_space_factor<\/td>0.6<\/td>Fract of xheight for fuzz sp<\/td><\/tr>
tosp_fuzzy_space_factor1<\/td>0.5<\/td>Fract of xheight for fuzz sp<\/td><\/tr>
tosp_fuzzy_space_factor2<\/td>0.72<\/td>Fract of xheight for fuzz sp<\/td><\/tr>
tosp_gap_factor<\/td>0.83<\/td>gap ratio to flip sp->kern<\/td><\/tr>
tosp_kern_gap_factor1<\/td>2<\/td>gap ratio to flip kern->sp<\/td><\/tr>
tosp_kern_gap_factor2<\/td>1.3<\/td>gap ratio to flip kern->sp<\/td><\/tr>
tosp_kern_gap_factor3<\/td>2.5<\/td>gap ratio to flip kern->sp<\/td><\/tr>
tosp_ignore_big_gaps<\/td>-1<\/td>xht multiplier<\/td><\/tr>
tosp_ignore_very_big_gaps<\/td>3.5<\/td>xht multiplier<\/td><\/tr>
tosp_rep_space<\/td>1.6<\/td>rep gap multiplier for space<\/td><\/tr>
tosp_enough_small_gaps<\/td>0.65<\/td>Fract of kerns reqd for isolated row stats<\/td><\/tr>
tosp_table_kn_sp_ratio<\/td>2.25<\/td>Min difference of kn and sp in table<\/td><\/tr>
tosp_table_xht_sp_ratio<\/td>0.33<\/td>Expect spaces bigger than this<\/td><\/tr>
tosp_table_fuzzy_kn_sp_ratio<\/td>3<\/td>Fuzzy if less than this<\/td><\/tr>
tosp_fuzzy_kn_fraction<\/td>0.5<\/td>New fuzzy kn alg<\/td><\/tr>
tosp_fuzzy_sp_fraction<\/td>0.5<\/td>New fuzzy sp alg<\/td><\/tr>
tosp_min_sane_kn_sp<\/td>1.5<\/td>Don’t trust spaces less than this times kn<\/td><\/tr>
tosp_init_guess_kn_mult<\/td>2.2<\/td>Thresh guess – mult kn by this<\/td><\/tr>
tosp_init_guess_xht_mult<\/td>0.28<\/td>Thresh guess – mult xht by this<\/td><\/tr>
tosp_max_sane_kn_thresh<\/td>5<\/td>Multiplier on kn to limit thresh<\/td><\/tr>
tosp_flip_caution<\/td>0<\/td>Don’t autoflip kn to sp when large separation<\/td><\/tr>
tosp_large_kerning<\/td>0.19<\/td>Limit use of xht gap with large kns<\/td><\/tr>
tosp_dont_fool_with_small_kerns<\/td>-1<\/td>Limit use of xht gap with odd small kns<\/td><\/tr>
tosp_near_lh_edge<\/td>0<\/td>Don’t reduce box if the top left is non-blank<\/td><\/tr>
tosp_silly_kn_sp_gap<\/td>0.2<\/td>Don’t let sp minus kn get too small<\/td><\/tr>
tosp_pass_wide_fuzz_sp_to_context<\/td>0.75<\/td>How wide fuzzies need context<\/td><\/tr>
textord_blob_size_bigile<\/td>95<\/td>Percentile for large blobs<\/td><\/tr>
textord_noise_area_ratio<\/td>0.7<\/td>Fraction of bounding box for noise<\/td><\/tr>
textord_blob_size_smallile<\/td>20<\/td>Percentile for small blobs<\/td><\/tr>
textord_initialx_ile<\/td>0.75<\/td>Ile of sizes for xheight guess<\/td><\/tr>
textord_initialasc_ile<\/td>0.9<\/td>Ile of sizes for xheight guess<\/td><\/tr>
textord_noise_sizelimit<\/td>0.5<\/td>Fraction of x for big t count<\/td><\/tr>
textord_noise_normratio<\/td>2<\/td>Dot to norm ratio for deletion<\/td><\/tr>
textord_noise_syfract<\/td>0.2<\/td>xh fract height error for norm blobs<\/td><\/tr>
textord_noise_sxfract<\/td>0.4<\/td>xh fract width error for norm blobs<\/td><\/tr>
textord_noise_hfract<\/td>0.015625<\/td>Height fraction to discard outlines as speckle noise<\/td><\/tr>
textord_noise_rowratio<\/td>6<\/td>Dot to norm ratio for deletion<\/td><\/tr>
textord_blshift_maxshift<\/td>0<\/td>Max baseline shift<\/td><\/tr>
textord_blshift_xfraction<\/td>9.99<\/td>Min size of baseline shift<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n
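Any of the parameters above can be overridden for a single run with the -c VAR=VALUE<\/code> option, and their current values can be inspected with --print-parameters<\/code>. A minimal sketch; the image name, output base and the numeric values used here are placeholders for illustration, not recommended settings:<\/p>\n\n\n\n
# look up the current value of one parameter
$ tesseract --print-parameters | grep tessedit_reject_doc_percent

# override two parameters for this run only (placeholder values)
$ tesseract image.png out -c tessedit_reject_doc_percent=70 -c language_model_penalty_non_dict_word=0.2<\/code><\/pre>\n\n\n\n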
\n\n\n\n

List of languages available in Tesseract. On Debian\/Ubuntu a language pack can be installed with the command sudo apt install tesseract-ocr-langcode<\/code> (for example, tesseract-ocr-hin for Hindi); a short usage sketch follows the table.<\/p>\n\n\n\n

Lang Code<\/th>Language<\/th>4.0 traineddata<\/th><\/tr><\/thead>
afr<\/td>Afrikaans<\/td>afr.traineddata<\/a><\/td><\/tr>
amh<\/td>Amharic<\/td>amh.traineddata<\/a><\/td><\/tr>
ara<\/td>Arabic<\/td>ara.traineddata<\/a><\/td><\/tr>
asm<\/td>Assamese<\/td>asm.traineddata<\/a><\/td><\/tr>
aze<\/td>Azerbaijani<\/td>aze.traineddata<\/a><\/td><\/tr>
aze_cyrl<\/td>Azerbaijani – Cyrillic<\/td>aze_cyrl.traineddata<\/a><\/td><\/tr>
bel<\/td>Belarusian<\/td>bel.traineddata<\/a><\/td><\/tr>
ben<\/td>Bengali<\/td>ben.traineddata<\/a><\/td><\/tr>
bod<\/td>Tibetan<\/td>bod.traineddata<\/a><\/td><\/tr>
bos<\/td>Bosnian<\/td>bos.traineddata<\/a><\/td><\/tr>
bul<\/td>Bulgarian<\/td>bul.traineddata<\/a><\/td><\/tr>
cat<\/td>Catalan; Valencian<\/td>cat.traineddata<\/a><\/td><\/tr>
ceb<\/td>Cebuano<\/td>ceb.traineddata<\/a><\/td><\/tr>
ces<\/td>Czech<\/td>ces.traineddata<\/a><\/td><\/tr>
chi_sim<\/td>Chinese – Simplified<\/td>chi_sim.traineddata<\/a><\/td><\/tr>
chi_tra<\/td>Chinese – Traditional<\/td>chi_tra.traineddata<\/a><\/td><\/tr>
chr<\/td>Cherokee<\/td>chr.traineddata<\/a><\/td><\/tr>
cym<\/td>Welsh<\/td>cym.traineddata<\/a><\/td><\/tr>
dan<\/td>Danish<\/td>dan.traineddata<\/a><\/td><\/tr>
deu<\/td>German<\/td>deu.traineddata<\/a><\/td><\/tr>
dzo<\/td>Dzongkha<\/td>dzo.traineddata<\/a><\/td><\/tr>
ell<\/td>Greek, Modern (1453-)<\/td>ell.traineddata<\/a><\/td><\/tr>
eng<\/td>English<\/td>eng.traineddata<\/a><\/td><\/tr>
enm<\/td>English, Middle (1100-1500)<\/td>enm.traineddata<\/a><\/td><\/tr>
epo<\/td>Esperanto<\/td>epo.traineddata<\/a><\/td><\/tr>
est<\/td>Estonian<\/td>est.traineddata<\/a><\/td><\/tr>
eus<\/td>Basque<\/td>eus.traineddata<\/a><\/td><\/tr>
fas<\/td>Persian<\/td>fas.traineddata<\/a><\/td><\/tr>
fin<\/td>Finnish<\/td>fin.traineddata<\/a><\/td><\/tr>
fra<\/td>French<\/td>fra.traineddata<\/a><\/td><\/tr>
frk<\/td>German Fraktur<\/td>frk.traineddata<\/a><\/td><\/tr>
frm<\/td>French, Middle (ca. 1400-1600)<\/td>frm.traineddata<\/a><\/td><\/tr>
gle<\/td>Irish<\/td>gle.traineddata<\/a><\/td><\/tr>
glg<\/td>Galician<\/td>glg.traineddata<\/a><\/td><\/tr>
grc<\/td>Greek, Ancient (-1453)<\/td>grc.traineddata<\/a><\/td><\/tr>
guj<\/td>Gujarati<\/td>guj.traineddata<\/a><\/td><\/tr>
hat<\/td>Haitian; Haitian Creole<\/td>hat.traineddata<\/a><\/td><\/tr>
heb<\/td>Hebrew<\/td>heb.traineddata<\/a><\/td><\/tr>
hin<\/td>Hindi<\/td>hin.traineddata<\/a><\/td><\/tr>
hrv<\/td>Croatian<\/td>hrv.traineddata<\/a><\/td><\/tr>
hun<\/td>Hungarian<\/td>hun.traineddata<\/a><\/td><\/tr>
iku<\/td>Inuktitut<\/td>iku.traineddata<\/a><\/td><\/tr>
ind<\/td>Indonesian<\/td>ind.traineddata<\/a><\/td><\/tr>
isl<\/td>Icelandic<\/td>isl.traineddata<\/a><\/td><\/tr>
ita<\/td>Italian<\/td>ita.traineddata<\/a><\/td><\/tr>
ita_old<\/td>Italian – Old<\/td>ita_old.traineddata<\/a><\/td><\/tr>
jav<\/td>Javanese<\/td>jav.traineddata<\/a><\/td><\/tr>
jpn<\/td>Japanese<\/td>jpn.traineddata<\/a><\/td><\/tr>
kan<\/td>Kannada<\/td>kan.traineddata<\/a><\/td><\/tr>
kat<\/td>Georgian<\/td>kat.traineddata<\/a><\/td><\/tr>
kat_old<\/td>Georgian – Old<\/td>kat_old.traineddata<\/a><\/td><\/tr>
kaz<\/td>Kazakh<\/td>kaz.traineddata<\/a><\/td><\/tr>
khm<\/td>Central Khmer<\/td>khm.traineddata<\/a><\/td><\/tr>
kir<\/td>Kirghiz; Kyrgyz<\/td>kir.traineddata<\/a><\/td><\/tr>
kor<\/td>Korean<\/td>kor.traineddata<\/a><\/td><\/tr>
kur<\/td>Kurdish<\/td>kur.traineddata<\/a><\/td><\/tr>
lao<\/td>Lao<\/td>lao.traineddata<\/a><\/td><\/tr>
lat<\/td>Latin<\/td>lat.traineddata<\/a><\/td><\/tr>
lav<\/td>Latvian<\/td>lav.traineddata<\/a><\/td><\/tr>
lit<\/td>Lithuanian<\/td>lit.traineddata<\/a><\/td><\/tr>
mal<\/td>Malayalam<\/td>mal.traineddata<\/a><\/td><\/tr>
mar<\/td>Marathi<\/td>mar.traineddata<\/a><\/td><\/tr>
mkd<\/td>Macedonian<\/td>mkd.traineddata<\/a><\/td><\/tr>
mlt<\/td>Maltese<\/td>mlt.traineddata<\/a><\/td><\/tr>
msa<\/td>Malay<\/td>msa.traineddata<\/a><\/td><\/tr>
mya<\/td>Burmese<\/td>mya.traineddata<\/a><\/td><\/tr>
nep<\/td>Nepali<\/td>nep.traineddata<\/a><\/td><\/tr>
nld<\/td>Dutch; Flemish<\/td>nld.traineddata<\/a><\/td><\/tr>
nor<\/td>Norwegian<\/td>nor.traineddata<\/a><\/td><\/tr>
ori<\/td>Oriya<\/td>ori.traineddata<\/a><\/td><\/tr>
pan<\/td>Panjabi; Punjabi<\/td>pan.traineddata<\/a><\/td><\/tr>
pol<\/td>Polish<\/td>pol.traineddata<\/a><\/td><\/tr>
por<\/td>Portuguese<\/td>por.traineddata<\/a><\/td><\/tr>
pus<\/td>Pushto; Pashto<\/td>pus.traineddata<\/a><\/td><\/tr>
ron<\/td>Romanian; Moldavian; Moldovan<\/td>ron.traineddata<\/a><\/td><\/tr>
rus<\/td>Russian<\/td>rus.traineddata<\/a><\/td><\/tr>
san<\/td>Sanskrit<\/td>san.traineddata<\/a><\/td><\/tr>
sin<\/td>Sinhala; Sinhalese<\/td>sin.traineddata<\/a><\/td><\/tr>
slk<\/td>Slovak<\/td>slk.traineddata<\/a><\/td><\/tr>
slv<\/td>Slovenian<\/td>slv.traineddata<\/a><\/td><\/tr>
spa<\/td>Spanish; Castilian<\/td>spa.traineddata<\/a><\/td><\/tr>
spa_old<\/td>Spanish; Castilian – Old<\/td>spa_old.traineddata<\/a><\/td><\/tr>
sqi<\/td>Albanian<\/td>sqi.traineddata<\/a><\/td><\/tr>
srp<\/td>Serbian<\/td>srp.traineddata<\/a><\/td><\/tr>
srp_latn<\/td>Serbian – Latin<\/td>srp_latn.traineddata<\/a><\/td><\/tr>
swa<\/td>Swahili<\/td>swa.traineddata<\/a><\/td><\/tr>
swe<\/td>Swedish<\/td>swe.traineddata<\/a><\/td><\/tr>
syr<\/td>Syriac<\/td>syr.traineddata<\/a><\/td><\/tr>
tam<\/td>Tamil<\/td>tam.traineddata<\/a><\/td><\/tr>
tel<\/td>Telugu<\/td>tel.traineddata<\/a><\/td><\/tr>
tgk<\/td>Tajik<\/td>tgk.traineddata<\/a><\/td><\/tr>
tgl<\/td>Tagalog<\/td>tgl.traineddata<\/a><\/td><\/tr>
tha<\/td>Thai<\/td>tha.traineddata<\/a><\/td><\/tr>
tir<\/td>Tigrinya<\/td>tir.traineddata<\/a><\/td><\/tr>
tur<\/td>Turkish<\/td>tur.traineddata<\/a><\/td><\/tr>
uig<\/td>Uighur; Uyghur<\/td>uig.traineddata<\/a><\/td><\/tr>
ukr<\/td>Ukrainian<\/td>ukr.traineddata<\/a><\/td><\/tr>
urd<\/td>Urdu<\/td>urd.traineddata<\/a><\/td><\/tr>
uzb<\/td>Uzbek<\/td>uzb.traineddata<\/a><\/td><\/tr>
uzb_cyrl<\/td>Uzbek – Cyrillic<\/td>uzb_cyrl.traineddata<\/a><\/td><\/tr>
vie<\/td>Vietnamese<\/td>vie.traineddata<\/a><\/td><\/tr>
yid<\/td>Yiddish<\/td>yid.traineddata<\/a><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n
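A minimal usage sketch for one of the languages above, assuming a Debian\/Ubuntu system and a hypothetical input image page.png<\/code>; package choice, paths and file names are placeholders:<\/p>\n\n\n\n
# install the Hindi language pack and confirm Tesseract can see it
$ sudo apt install tesseract-ocr-hin
$ tesseract --list-langs

# run OCR using the newly installed language
$ tesseract page.png out -l hin

# or point Tesseract at a directory holding a manually downloaded
# hin.traineddata file (e.g. one of the traineddata files linked above)
$ tesseract page.png out -l hin --tessdata-dir \/path\/to\/tessdata<\/code><\/pre>\n\n\n\n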
\n\n\n\n

References:<\/p>\n\n\n\n

https:\/\/github.com\/tesseract-ocr\/tesseract<\/a>
https:\/\/tesseract-ocr.github.io\/<\/a>
https:\/\/pypi.org\/project\/pytesseract\/<\/a><\/p>
