TL;DR: Run update_unicode.sh after the publication of a new Unicode
standard and commit the resulting unicode_widths.h file.

The long version
================

The Git source code ships the file unicode_widths.h which contains
tables of zero and double width Unicode code points, respectively.
These tables are generated using update_unicode.sh in this directory.
update_unicode.sh itself uses a third-party tool, uniset, to query two
Unicode data files for the interesting code points.

On first run, update_unicode.sh clones uniset from Github and builds it.
This requires a current-ish version of autoconf (2.69 works per December
2016).

On each run, update_unicode.sh checks whether more recent Unicode data
files are available from the Unicode consortium, and rebuilds the header
unicode_widths.h with the new data. The new header can then be
committed.
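
As a concrete illustration, the whole maintenance step boils down to a
few commands. This is only a sketch: it assumes this README's directory
is the current working directory, that the regenerated header ends up in
the same directory, and the commit message is just an example.

    ./update_unicode.sh              # clones and builds uniset on first run,
                                     # fetches newer Unicode data files if
                                     # available, regenerates unicode_widths.h
    git diff -- unicode_widths.h     # review the regenerated tables
    git add unicode_widths.h
    git commit -m "update unicode_widths.h for new Unicode standard"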