config: fix rounding error when calculating background color with alpha

We use pre-multiplied alpha color channels, but suffered from bad
rounding errors because the alpha divider was being truncated to an
integer.

The algorithm for pre-multiplying a color channel is:

  alpha_divider = 0xffff / alpha
  pre_mult_color = color / alpha_divider

In order to fix the rounding errors, we could turn ‘alpha_divider’
into a double.

That, however, would introduce a performance penalty, since we would
now need to do floating-point math for each cell.

The algorithm can be trivially converted to:

  pre_mult_color = color * alpha / 0xffff

Since both the color and alpha values are < 65536, the multiplication is
“safe”; it will not overflow a uint32_t.
Daniel Eklöf 2020-12-20 12:48:06 +01:00
parent 946678c7ef
commit dee61e6239

@@ -53,16 +53,12 @@ conf_to_color(const struct yml_node *node)
     uint16_t blue = hex_byte(&hex[4]);
     uint16_t alpha = hex_byte(&hex[6]);
 
-    if (alpha == 0)
-        return (pixman_color_t){0, 0, 0, 0};
-
     alpha |= alpha << 8;
-    int alpha_div = 0xffff / alpha;
     return (pixman_color_t){
-        .red = (red << 8 | red) / alpha_div,
-        .green = (green << 8 | green) / alpha_div,
-        .blue = (blue << 8 | blue) / alpha_div,
+        .red = (red << 8 | red) * alpha / 0xffff,
+        .green = (green << 8 | green) * alpha / 0xffff,
+        .blue = (blue << 8 | blue) * alpha / 0xffff,
         .alpha = alpha,
     };
 }