Double decoded text/html parts (was: [PATCH] test: Add test for searching of uncommonly encoded messages)

Michal Sojka sojkam1 at fel.cvut.cz
Sun Feb 26 01:33:16 PST 2012


On Sat, 25 Feb 2012, Serge Z wrote:
> 
> Hi!
> I've struck another problem:
> 
> I've got a text/html email with a body encoded in cp1251.
> Its encoding is declared in both the Content-Type email header and an HTML <meta>
> tag. So when the client tries to display it with an external html2text converter,
> the message is decoded twice: first by the client, second by html2text (I
> use w3m).

Right. After my analysis of the problem (see below) it seems there is no
trivial solution for this.
 
> As I understand, notmuch (while indexing this message) decodes it once and
> indexes it correctly (though the HTML tags are included in the index). But what if
> the message contains no "charset" parameter in the Content-Type email header but
> contains a <meta> content-type tag with a charset noted?

This should not happen. It violates RFC 2046, section 4.1.2.

> Should such a message be considered wrongly composed, or should it be
> indexed by diving into the HTML details (the <meta> content-type)?

I don't think it's wrongly composed, and with my patch it should even be
indexed correctly. The problem arises when you view such a message with
an external HTML viewer.

In my mailbox I can find two different types of text/html parts. First,
parts that contain a complete HTML document, including all headers and
especially <meta http-equiv="content-type" content="text/html; ...">.
Such parts could be passed to an external HTML viewer without any
decoding by notmuch.

The second type is a text/html part that does not contain any HTML
headers. Passing such a part to an external HTML viewer undecoded would
require the viewer to guess the correct charset from the content.

AFAIK Firefox users can set a fallback charset (used for HTML documents
with an unknown charset) in the preferences, but I don't know what other
browsers would do. In particular, do you know how w3m behaves when the
charset is not specified?

In any case, if we want notmuch to do the right thing, we should analyze
the content of text/html parts and decide whether or not to decode the
part. A simple heuristic could be to search the content of the part for
the strings "charset=" and "encoding="; if either is found, notmuch
would not decode that part. Otherwise it would decode it according to
the Content-Type header.
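To make the idea concrete, here is a minimal sketch of that heuristic in
Python. The function name is invented here, and notmuch itself is written
in C, so this only illustrates the decision, not an actual implementation:

```python
def should_decode_html_part(raw_bytes):
    """Heuristic: if the HTML part itself declares a charset (e.g. in a
    <meta> tag or an XML declaration), leave it undecoded and let the
    external viewer honor that declaration; otherwise decode it
    according to the MIME Content-Type header."""
    # Such declarations live near the top of the document, so searching
    # only the first couple of kilobytes is enough.
    head = raw_bytes[:2048].lower()
    return not (b"charset=" in head or b"encoding=" in head)
```

For example, a bare `<html><body>...</body></html>` part would be decoded
by notmuch, while a part starting with
`<meta http-equiv="content-type" content="text/html; charset=cp1251">`
would be passed through untouched.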

As a curiosity, I found the following in one of my emails. Note that two
different encodings (iso-8859-2 and windows-1250) are specified at the
same time :) That's why I think fixing this problem won't be trivial.

Content-Type: text/html; charset="iso-8859-2"
Content-Transfer-Encoding: 8bit

<?xml version="1.0" encoding="windows-1250" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-2" />
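As a quick illustration of why such messages are awkward, a hypothetical
helper (Python, all names invented here) could collect every charset
declaration visible in a raw text/html part; for the message above it
finds three declarations naming two different charsets:

```python
import re

def declared_charsets(content_type_header, body):
    """Collect every charset declaration visible in a text/html part:
    the MIME Content-Type header, an XML declaration, and a <meta> tag.
    Returns a list of (source, charset) pairs."""
    found = []
    m = re.search(r'charset="?([\w-]+)"?', content_type_header, re.I)
    if m:
        found.append(("Content-Type header", m.group(1).lower()))
    m = re.search(r'<\?xml[^>]*encoding="([\w-]+)"', body, re.I)
    if m:
        found.append(("XML declaration", m.group(1).lower()))
    m = re.search(r'<meta[^>]*charset=([\w-]+)', body, re.I)
    if m:
        found.append(("meta tag", m.group(1).lower()))
    return found
```

Any part for which this returns more than one distinct charset has no
single right answer for decoding, whatever heuristic notmuch picks.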

Cheers,
-Michal
