Problem was that I usually only discovered the issue when I went to read the book lol
I never did that; my connection was too slow for me to want to take up someone’s DCC slot for like a day just to get an entire movie. Remember all the frustrating idiots who would share .lit files but forget to remove the DRM from them?
Ah, good to know. Back in my day, when we had to walk a hundred miles to school in the snow, uphill both ways, IRC was the only place to get ebooks. I’m guessing it’s just the old users clinging on now.
Man, I’m getting flashbacks to my days running OmenServe on Undernet. I had no idea people were still doing this! How does the content compare to places like Anna’s Archive these days?
Also, if you don’t feel comfortable building Bookworm from source yourself, and you feel like you can trust me, here’s a build of the latest Bookworm code from GitHub for 64-bit Windows: https://www.sendspace.com/pro/dl/rd388d
If you use Bookworm and use the built-in support for espeak, you can get up to 600 words per minute or so. DECtalk can go well over 900 words per minute. As far as I know, Cocoa tops out at around 500 words per minute. So all of the options except Piper should be fine for you.
It really depends on your use case. If you want something that sounds pretty okay, and is decently fast, Piper fits the bill. However, this is just a command line TTS system; you’ll need to build all the supporting infrastructure if you want it to read audiobooks. https://github.com/rhasspy/piper
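To give a rough idea of what “building the supporting infrastructure” means, here’s a minimal Python sketch that pipes text into the piper command line tool and writes a WAV file. It assumes the piper binary is on your PATH and that you’ve already downloaded a voice; the model file name below is just an example, not something the repo ships by default.

```python
# Minimal sketch: feed text to the piper CLI and save the result as a WAV.
# Assumes piper is installed and on PATH; the model file name is an example,
# so swap in whichever voice you actually downloaded.
import subprocess

text = "Chapter one. It was a dark and stormy night."
model = "en_US-lessac-medium.onnx"  # example voice model file

# piper reads text on stdin and writes audio to the file given by --output_file
subprocess.run(
    ["piper", "--model", model, "--output_file", "chapter1.wav"],
    input=text.encode("utf-8"),
    check=True,
)
```

From there you’d still need chapter splitting, queueing, playback controls, and so on, which is the “supporting infrastructure” part.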
An extension that lets the free and open-source NVDA screen reader use Piper lives here: https://github.com/mush42/piper-nvda
If you want something that can run in real time, though it sounds somewhat robotic, you want DECtalk. This repo comes with libraries and DLLs, as well as several sample applications. Note, however, that the licensing status of this code is…uh…dubious, to say the least. DECtalk was abandonware for years, and the developer leaked the source code on a mailing list in the 2000s. However, ownership of the code was recently re-established, and DECtalk is now a commercial product once again. But the new owners haven’t come after the repo yet: https://github.com/dectalk/dectalk
If you want a robotic but real-time voice that’s fully FOSS with a known licensing status, you want espeak-ng: https://github.com/espeak-ng/espeak-ng
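To hear what a high rate like 600 words per minute actually sounds like, here’s a small Python sketch that drives espeak-ng directly. It assumes espeak-ng is installed and on your PATH; the voice name and output file are just examples.

```python
# Minimal sketch: call espeak-ng at a high speaking rate.
# -v picks the voice, -s sets the rate in words per minute (default is 175),
# and -w writes a WAV file instead of playing to the sound card.
import subprocess

subprocess.run(
    [
        "espeak-ng",
        "-v", "en-us",
        "-s", "600",  # very fast; drop this if it's past what you can follow
        "-w", "fast.wav",
        "This is espeak-ng running at around six hundred words per minute.",
    ],
    check=True,
)
```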
If you want a fully fledged software application to read things to you, but you don’t need a screen reader and don’t want to build scripts yourself, you want Bookworm: https://github.com/blindpandas/bookworm
Note, however, that you should build Bookworm from source. While the author accepts pull requests, because of his circumstances he’s no longer able to build new releases: https://github.com/blindpandas/bookworm/discussions/224
If you are okay with using closed-source freeware, Balabolka is another full text-to-speech reader: https://www.cross-plus-a.com/balabolka.htm
Apparently! I don’t hide my data in any way, and I constantly get ads in languages I don’t speak. Usually French, but sometimes Hindi or Chinese. And as a blind person myself, I’m not sure that my well-paid, full-time job working in large-enterprise and big-tech accessibility is altruism deserving of thanks haha.
I assume it’s because I live in Canada, and big American data just assumes all Canadians speak French. I regularly get French ads on English websites.
I don’t block anything. I work in accessibility, so it’s important to me to know what the experiences are like for my fellow users with disabilities. I also don’t want to recommend sites or apps that are riddled with inaccessible ads; I’d rather not give them traffic at all. But even though I let them track me, I still get ads in a language I don’t speak for cars I can’t drive. What’re they doing with all that data?
Good to know; thanks! I’ll keep an eye on it.
I was having issues with outgoing federation to Mastodon on 0.19.0. I just did the update five minutes ago, so we’ll see if that fixes it. If you’re seeing this comment I guess it’s working at the moment.
A couple reasons, I think:
AI dubbing: this makes it way easier for YouTube to add secondary dubbed audio tracks to videos in multiple languages. Given Google’s push to add AI into everything, including creating AI-related OKRs, that’s probably the primary driver. Support for multiple audio tracks is just the infrastructure needed for AI dubbing.
Audio description: Google is fighting enough antitrust-related legal battles right now. The fact that YouTube doesn’t support audio description for those of us who are blind has been an issue for a long time, and now that basically every other video streaming service supports it, I suspect they’re starting to feel increased pressure to get on board. Once again, support for multiple audio tracks is the infrastructure needed to offer audio description.
Surprised nobody has mentioned my two favourites:
Most of the other stuff I listen to is either industry specific or fandom/hobby specific.
There’s also a list here, though last updated in 2020: https://distributedcomputing.info/projects.html
Most of those projects remain active in some form.