pointless


  • Some caveats, though: to share the same home folder safely, it’s best to use the same desktop environment on both distros. With Debian paired with Fedora it’s hard to match the desktops’ release versions, though, so there can be discrepancies in the user config files in the home folder when you configure features in Fedora that aren’t available in Debian yet.

    Also, the system folder layout (locations of libraries and include files) differs between the two, so anything in the home folder that’s linked against libraries from one distro won’t work on the other. Especially if you’re going to compile anything in the home folder – including stuff that the package managers of scripting languages like Lua and Python compile themselves – that can lead to major headaches.


  • I don’t think it does virtual desktops with labwc yet; but once it does, labwc is as good a replacement for xfwm as any, IMHO.

    labwc can do virtual desktops: there’s a desktop switcher, and the window switcher is aware only of windows on the current desktop – but I can’t figure out how to query window-per-desktop information programmatically otherwise. waybar and wlrctl, as well as xfce-panel, don’t seem to have access to that info either. Still waiting for some Wayland protocol extension to expose it, I suppose.


  • Ubuntu’s font rendering used to be better than every other distro’s, because they carried patches to FreeType that were legally ‘iffy’ as to whether they infringed on Microsoft’s patents; later, whatever exclusivity there was around those patents expired, and the patches got upstreamed into FreeType itself.

    So now all Linux desktops are capable of subpixel font rendering, hinting, whatever. But before that, font rendering really was hideous on other distros.


  • PyMuPDF is excellent for extracting ‘structured’ text from a pdf page — though I believe ‘pulling out relevant information’ will still be a manual task, UNLESS the text you’re working with allows parsing into meaningful units.

    That’s because ‘textual’ content in a pdf is nothing other than a bunch of instructions to draw glyphs inside a rect that represents a page. Utilities that come with mupdf or poppler arrange those glyphs (not always perfectly) into ‘blocks’, ‘lines’, and ‘words’ based solely on whitespace separation. The programmer who uses those utilities in an end-user-facing application then has to figure out how to create the illusion (so to speak) that the user is selecting, copying, or searching for paragraphs, sentences, and so on, in proper reading order.

    PyMuPDF comes with a rich collection of convenience functions to make all that less painful – dehyphenation, eliminating superfluous whitespace, etc. – but you’ll still need some further processing to pick out the humanly relevant info.
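    To give an idea of the extraction side, here’s a minimal PyMuPDF sketch (the file name and page number are just placeholders):

    ```python
    import fitz  # PyMuPDF

    doc = fitz.open("some_document.pdf")  # placeholder file name
    page = doc[0]                         # first page, as an example

    # Plain text, with hyphenated line-ends rejoined
    text = page.get_text("text", flags=fitz.TEXT_DEHYPHENATE)

    # The same content grouped into whitespace-separated blocks:
    # each entry is (x0, y0, x1, y1, text, block_no, block_type)
    blocks = page.get_text("blocks")

    # Rectangles of every occurrence of a phrase on the page
    hits = page.search_for("some phrase")
    ```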

    Python’s built-in regex capabilities can suffice for that parsing; if they don’t, you might want to look into NLTK, which applies more sophisticated methods to tokenize words & sentences.
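    For instance, a minimal NLTK sketch (the sample string is made up; the ‘punkt’ tokenizer data needs a one-time download):

    ```python
    import nltk
    from nltk.tokenize import sent_tokenize, word_tokenize

    nltk.download("punkt")  # one-time download of the tokenizer models

    plaintext = "Dr. Smith went to Washington. He arrived on the 3rd."  # made-up sample
    sentences = sent_tokenize(plaintext)  # ['Dr. Smith went to Washington.', 'He arrived on the 3rd.']
    words = word_tokenize(sentences[0])   # ['Dr.', 'Smith', 'went', 'to', 'Washington', '.']
    ```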

    EDIT: I really should’ve mentioned some proper full-text search tools. Once you have a good plaintext representation of a pdf page, you might want to feed that representation into tools like the following to index it properly for the relevant info:

    https://lunr.readthedocs.io/en/latest/ – this is easy to set up & use, esp. in a python project.
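    A rough sketch of how indexing a few extracted pages might look with lunr.py (the page ids and text are placeholders):

    ```python
    from lunr import lunr

    # One document per pdf page; 'body' holds the plaintext extracted earlier
    documents = [
        {"id": "page-1", "body": "plaintext of page 1 ..."},  # placeholder text
        {"id": "page-2", "body": "plaintext of page 2 ..."},
    ]

    idx = lunr(ref="id", fields=("body",), documents=documents)

    # Returns matches like [{'ref': 'page-2', 'score': ...}, ...]
    results = idx.search("some phrase")
    ```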

    … it’s based on principles that are put to use in this full-scale, ‘industrial strength’ full-text search engine: https://solr.apache.org/ – it’s a bit of a pain to set up, but python can interface with it through any http client. Once you set up some kind of mapping between the search tokens/keywords/tags, the plaintext page, & the actual pdf, you can get from a phrase search, for example, to a bunch of vector graphics (i.e. the pdf) relatively painlessly.
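    Querying Solr over HTTP is just a GET against its select handler; here’s a minimal sketch with requests, where the core name (‘pdfpages’) and the fields (‘body’, ‘pdf_path’) are hypothetical and depend on how you set up the index:

    ```python
    import requests

    # 'pdfpages' is a hypothetical core; 'body' and 'pdf_path' are hypothetical fields
    resp = requests.get(
        "http://localhost:8983/solr/pdfpages/select",
        params={"q": 'body:"some phrase"', "fl": "id,pdf_path,score", "rows": 10},
    )

    for doc in resp.json()["response"]["docs"]:
        print(doc["id"], doc["pdf_path"])  # map each hit back to the actual pdf
    ```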


  • Another vote for Tesseract – just to clarify the terminology, though: PDF is a fragile format best treated as read-only; so you really don’t want to edit a pdf, but rather make a new one using the same (or cleaned-up) bitmaps and a new OCR text layer.

    Now, Tesseract is excellent at recognizing glyphs; but especially if the scanned image is a little fuzzy, the layout detection falters, and when it falters you get redundant line breaks & chunks of text in the wrong order – all of which gets incredibly annoying for searching & copying purposes. So if you can spare the time, and the text requires it, you may need to mark regions (paragraphs & titles mainly) on the bitmap image manually. There are a few frontends to Tesseract that help with a task like that; check out, e.g., https://github.com/manisandro/gImageReader – inside single-paragraph blocks of text, Tesseract doesn’t get as easily confused, and the text comes out in the correct reading order & w/o redundant breaks.
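    A minimal sketch of the ‘new pdf with an OCR text layer’ route in Python, using pytesseract and Pillow (file names are placeholders, and Tesseract itself has to be installed separately):

    ```python
    import pytesseract
    from PIL import Image

    img = Image.open("scan_page1.png")  # placeholder: the (cleaned-up) bitmap

    # Ask Tesseract for a one-page pdf that layers the recognized text under the image.
    # '--psm 6' tells it to assume a single uniform block of text, which helps once
    # you've already cropped the image down to one paragraph/region.
    pdf_bytes = pytesseract.image_to_pdf_or_hocr(img, extension="pdf", config="--psm 6")

    with open("scan_page1_ocr.pdf", "wb") as f:
        f.write(pdf_bytes)
    ```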