• Valmond@lemmy.world · 4 months ago

    The artists also took that into account and used it in their favour.

    Source: I worked with pixel artists in the early 2000s.

    • Captain Aggravated@sh.itjust.works · 4 months ago

      The artists did, and so did the engineers.

      For example, the Apple II achieved 16 onscreen colors via NTSC artifacting. The 8-Bit Guy did a great video on this: programmers could only directly choose a handful of colors, but putting pixels next to each other in certain combinations made them turn other colors, which is why white text would fringe green and/or purple at the edges.
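
      Roughly, you can caricature the effect like this (a toy Python sketch of hi-res artifacting; the function name and lookup-style logic are my own illustration — real hardware derives the hue from the NTSC color subcarrier phase, and the exact parity-to-hue mapping varies by machine and monitor):

      ```python
      # Toy model of Apple II hi-res NTSC artifacting (simplified; hypothetical
      # helper, not how the hardware actually computes anything).
      def artifact_color(row_bits, x, high_bit=0):
          """Approximate perceived color of pixel x in a row of 1-bit pixels."""
          if not row_bits[x]:
              return "black"
          neighbor_lit = (x > 0 and row_bits[x - 1]) or \
                         (x + 1 < len(row_bits) and row_bits[x + 1])
          if neighbor_lit:
              return "white"                     # adjacent lit pixels blur to white
          if high_bit:                           # the byte's high bit shifts the phase
              return "orange" if x % 2 else "blue"
          return "green" if x % 2 else "purple"  # lone pixels take a column-phase hue
      ```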

      The IBM CGA card took it to a whole other level; it had a 4-bit digital RGBI video port for computer monitors and a composite port for televisions. Plugged into an RGBI monitor, you got a sharp picture in one of its four-color palettes: black, white, cyan, and magenta, or black, yellow, red, and green, each available in a bright and a dark variant. But if the artist dithered the graphics properly and the card was plugged into a composite TV or monitor, the same graphics appeared softer but in 16 colors. Text was harder to read, but games looked better, so business customers could buy an RGBI monitor and gamers could use a TV.
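
      Here's a rough sketch of why dithering buys colors on composite, assuming the 320×200 4-color mode (treating each pixel pair as a 4-bit index is my simplification — the actual hue again comes from subcarrier phase, not a lookup):

      ```python
      # Two adjacent 2-bit CGA pixels fall inside one NTSC color clock, so each
      # 4-bit pair reads as one of 16 composite hues on a TV.
      def composite_clock_indices(pixels):
          """Collapse a row of 2-bit pixel values into 4-bit color-clock indices."""
          return [(pixels[i] << 2) | pixels[i + 1] for i in range(0, len(pixels), 2)]

      # A row dithered between palette colors 1 and 2 on alternating pixels reads
      # as a single new hue (index 0b0110) on a composite screen:
      row = [1, 2] * 8
      print(composite_clock_indices(row))   # [6, 6, 6, 6, 6, 6, 6, 6]
      ```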

      In the 16-bit era, I can cite titles on the SNES and Genesis that took advantage of the limitations of the NTSC standard to pull off graphical tricks the console couldn't actually do, like transparent water. I think it's in Emerald Hill Zone in Sonic the Hedgehog 2, and in some levels of Super Mario World, where you can enter and exit water that is drawn by rapidly jiggling a dithered pattern back and forth. On a CRT television this blurs into a translucent effect; on an entirely digital display it looks like an opaque checkerboard or grille.
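
      The trick boils down to something like this (a minimal Python sketch; the function name and per-frame phase flip are my own illustration, not the games' actual code):

      ```python
      # Dither "transparency": render the water as a 50% checkerboard whose phase
      # flips every frame. CRT phosphor decay plus NTSC blur average consecutive
      # frames into a translucent mix; a sharp digital display just shows the
      # raw checkerboard.
      def water_pixel(x, y, frame, water_color, scene_color):
          """Color of one pixel inside the 'transparent' water region."""
          if (x + y + frame) % 2 == 0:   # checkerboard; jiggles one pixel per frame
              return water_color         # this frame: water covers the pixel
          return scene_color             # next frame it shows the scene beneath
      ```

      Averaged over two frames, every pixel spends half its time showing water and half showing the scene, which is exactly a 50% alpha blend — hardware the consoles (mostly) didn't have.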