
How to Fix the UnicodeDecodeError Raised When Running Python


Python 2.7 has a bug on Windows that makes it fail at runtime with:

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 33: ordinal not in range(128)
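
On an affected machine the error typically appears as soon as the mimetypes module reads the Windows registry; a minimal way to trigger it (assuming a Chinese-locale Windows installation whose registry contains non-ASCII MIME data) looks like this:

# Python 2.7 on Windows: any call that initializes the MIME database can fail.
import mimetypes
mimetypes.init()                        # reads HKEY_CLASSES_ROOT -> UnicodeDecodeError
print mimetypes.guess_type("demo.txt")  # never reached on an affected machine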

The fix is as follows:

Edit Python27\Lib\mimetypes.py, select all of its contents, and replace them with the patched script below, or apply the same changes to your own copy by hand:

"""Guess the MIME type of a file.
 
This module defines two useful functions:
 
guess_type(url, strict=1) -- guess the MIME type and encoding of a URL.
 
guess_extension(type, strict=1) -- guess the extension for a given MIME type.
 
It also contains the following, for tuning the behavior:
 
Data:
 
knownfiles -- list of files to parse
inited -- flag set when init() has been called
suffix_map -- dictionary mapping suffixes to suffixes
encodings_map -- dictionary mapping suffixes to encodings
types_map -- dictionary mapping suffixes to types
 
Functions:
 
init([files]) -- parse a list of files, default knownfiles (on Windows, the
 default values are taken from the registry)
read_mime_types(file) -- parse one file, return a dictionary or None
"""
from itertools import count
 
import os
import sys
import posixpath
import urllib
try:
 import _winreg
except ImportError:
 _winreg = None
 
__all__ = [
 "guess_type","guess_extension","guess_all_extensions",
 "add_type","read_mime_types","init"
]
 
knownfiles = [
 "/etc/mime.types",
 "/etc/httpd/mime.types",     # Mac OS X
 "/etc/httpd/conf/mime.types",    # Apache
 "/etc/apache/mime.types",     # Apache 1
 "/etc/apache2/mime.types",     # Apache 2
 "/usr/local/etc/httpd/conf/mime.types",
 "/usr/local/lib/netscape/mime.types",
 "/usr/local/etc/httpd/conf/mime.types",  # Apache 1.2
 "/usr/local/etc/mime.types",    # Apache 1.3
 ]
 
inited = False
_db = None
 
 
class MimeTypes:
 """MIME-types datastore.
 
 This datastore can handle information from mime.types-style files
 and supports basic determination of MIME type from a filename or
 URL, and can guess a reasonable extension given a MIME type.
 """
 
 def __init__(self, filenames=(), strict=True):
  if not inited:
   init()
  self.encodings_map = encodings_map.copy()
  self.suffix_map = suffix_map.copy()
  self.types_map = ({}, {}) # dict for (non-strict, strict)
  self.types_map_inv = ({}, {})
  for (ext, type) in types_map.items():
   self.add_type(type, ext, True)
  for (ext, type) in common_types.items():
   self.add_type(type, ext, False)
  for name in filenames:
   self.read(name, strict)
 
 def add_type(self, type, ext, strict=True):
  """Add a mapping between a type and an extension.
 
  When the extension is already known, the new
  type will replace the old one. When the type
  is already known the extension will be added
  to the list of known extensions.
 
  If strict is true, information will be added to
  list of standard types, else to the list of non-standard
  types.
  """
  self.types_map[strict][ext] = type
  exts = self.types_map_inv[strict].setdefault(type, [])
  if ext not in exts:
   exts.append(ext)
 
 def guess_type(self, url, strict=True):
  """Guess the type of a file based on its URL.
 
  Return value is a tuple (type, encoding) where type is None if
  the type can't be guessed (no or unknown suffix) or a string
  of the form type/subtype, usable for a MIME Content-type
  header; and encoding is None for no encoding or the name of
  the program used to encode (e.g. compress or gzip). The
  mappings are table driven. Encoding suffixes are case
  sensitive; type suffixes are first tried case sensitive, then
  case insensitive.
 
  The suffixes .tgz, .taz and .tz (case sensitive!) are all
  mapped to '.tar.gz'. (This is table-driven too, using the
  dictionary suffix_map.)
 
  Optional `strict' argument when False adds a bunch of commonly found,
  but non-standard types.
  """
  scheme, url = urllib.splittype(url)
  if scheme == 'data':
   # syntax of data URLs:
   # dataurl := "data:" [ mediatype ] [ ";base64" ] "," data
   # mediatype := [ type "/" subtype ] *( ";" parameter )
   # data  := *urlchar
   # parameter := attribute "=" value
   # type/subtype defaults to "text/plain"
   comma = url.find(',')
   if comma < 0:
    # bad data URL
    return None, None
   semi = url.find(';', 0, comma)
   if semi >= 0:
    type = url[:semi]
   else:
    type = url[:comma]
   if '=' in type or '/' not in type:
    type = 'text/plain'
   return type, None   # never compressed, so encoding is None
  base, ext = posixpath.splitext(url)
  while ext in self.suffix_map:
   base, ext = posixpath.splitext(base + self.suffix_map[ext])
  if ext in self.encodings_map:
   encoding = self.encodings_map[ext]
   base, ext = posixpath.splitext(base)
  else:
   encoding = None
  types_map = self.types_map[True]
  if ext in types_map:
   return types_map[ext], encoding
  elif ext.lower() in types_map:
   return types_map[ext.lower()], encoding
  elif strict:
   return None, encoding
  types_map = self.types_map[False]
  if ext in types_map:
   return types_map[ext], encoding
  elif ext.lower() in types_map:
   return types_map[ext.lower()], encoding
  else:
   return None, encoding
 
 def guess_all_extensions(self, type, strict=True):
  """Guess the extensions for a file based on its MIME type.
 
  Return value is a list of strings giving the possible filename
  extensions, including the leading dot ('.'). The extension is not
  guaranteed to have been associated with any particular data stream,
  but would be mapped to the MIME type `type' by guess_type().
 
  Optional `strict' argument when false adds a bunch of commonly found,
  but non-standard types.
  """
  type = type.lower()
  extensions = self.types_map_inv[True].get(type, [])
  if not strict:
   for ext in self.types_map_inv[False].get(type, []):
    if ext not in extensions:
     extensions.append(ext)
  return extensions
 
 def guess_extension(self, type, strict=True):
  """Guess the extension for a file based on its MIME type.
 
  Return value is a string giving a filename extension,
  including the leading dot ('.'). The extension is not
  guaranteed to have been associated with any particular data
  stream, but would be mapped to the MIME type `type' by
  guess_type(). If no extension can be guessed for `type', None
  is returned.
 
  Optional `strict' argument when false adds a bunch of commonly found,
  but non-standard types.
  """
  extensions = self.guess_all_extensions(type, strict)
  if not extensions:
   return None
  return extensions[0]
 
 def read(self, filename, strict=True):
  """
  Read a single mime.types-format file, specified by pathname.
 
  If strict is true, information will be added to
  list of standard types, else to the list of non-standard
  types.
  """
  with open(filename) as fp:
   self.readfp(fp, strict)
 
 def readfp(self, fp, strict=True):
  """
  Read a single mime.types-format file.
 
  If strict is true, information will be added to
  list of standard types, else to the list of non-standard
  types.
  """
  while 1:
   line = fp.readline()
   if not line:
    break
   words = line.split()
   for i in range(len(words)):
    if words[i][0] == '#':
     del words[i:]
     break
   if not words:
    continue
   type, suffixes = words[0], words[1:]
   for suff in suffixes:
    self.add_type(type, '.' + suff, strict)
 
 def read_windows_registry(self, strict=True):
  """
  Load the MIME types database from Windows registry.
 
  If strict is true, information will be added to
  list of standard types, else to the list of non-standard
  types.
  """
 
  # Windows only
  if not _winreg:
   return
 
  def enum_types(mimedb):
   for i in count():
    try:
     yield _winreg.EnumKey(mimedb, i)
    except EnvironmentError:
     break
 
  default_encoding = sys.getdefaultencoding()
  with _winreg.OpenKey(_winreg.HKEY_CLASSES_ROOT, '') as hkcr:
   for subkeyname in enum_types(hkcr):
    try:
     with _winreg.OpenKey(hkcr, subkeyname) as subkey:
      # Only check file extensions
      if not subkeyname.startswith("."):
       continue
      # raises EnvironmentError if no 'Content Type' value
      mimetype, datatype = _winreg.QueryValueEx(
       subkey, 'Content Type')
      if datatype != _winreg.REG_SZ:
       continue
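      # The try/except below is the relevant change in this patched copy:
      # registry data that cannot be represented in the default (ascii)
      # encoding is skipped instead of aborting init().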
      try:
       mimetype = mimetype.encode(default_encoding)
       subkeyname = subkeyname.encode(default_encoding)
      except UnicodeEncodeError:
       continue
      self.add_type(mimetype, subkeyname, strict)
    except EnvironmentError:
     continue
 
def guess_type(url, strict=True):
 """Guess the type of a file based on its URL.
 
 Return value is a tuple (type, encoding) where type is None if the
 type can't be guessed (no or unknown suffix) or a string of the
 form type/subtype, usable for a MIME Content-type header; and
 encoding is None for no encoding or the name of the program used
 to encode (e.g. compress or gzip). The mappings are table
 driven. Encoding suffixes are case sensitive; type suffixes are
 first tried case sensitive, then case insensitive.
 
 The suffixes .tgz, .taz and .tz (case sensitive!) are all mapped
 to ".tar.gz". (This is table-driven too, using the dictionary
 suffix_map).
 
 Optional `strict' argument when false adds a bunch of commonly found, but
 non-standard types.
 """
 if _db is None:
  init()
 return _db.guess_type(url, strict)
 
 
def guess_all_extensions(type, strict=True):
 """Guess the extensions for a file based on its MIME type.
 
 Return value is a list of strings giving the possible filename
 extensions, including the leading dot ('.'). The extension is not
 guaranteed to have been associated with any particular data
 stream, but would be mapped to the MIME type `type' by
 guess_type(). If no extension can be guessed for `type', None
 is returned.
 
 Optional `strict' argument when false adds a bunch of commonly found,
 but non-standard types.
 """
 if _db is None:
  init()
 return _db.guess_all_extensions(type, strict)
 
def guess_extension(type, strict=True):
 """Guess the extension for a file based on its MIME type.
 
 Return value is a string giving a filename extension, including the
 leading dot ('.'). The extension is not guaranteed to have been
 associated with any particular data stream, but would be mapped to the
 MIME type `type' by guess_type(). If no extension can be guessed for
 `type', None is returned.
 
 Optional `strict' argument when false adds a bunch of commonly found,
 but non-standard types.
 """
 if _db is None:
  init()
 return _db.guess_extension(type, strict)
 
def add_type(type, ext, strict=True):
 """Add a mapping between a type and an extension.
 
 When the extension is already known, the new
 type will replace the old one. When the type
 is already known the extension will be added
 to the list of known extensions.
 
 If strict is true, information will be added to
 list of standard types, else to the list of non-standard
 types.
 """
 if _db is None:
  init()
 return _db.add_type(type, ext, strict)
 
 
def init(files=None):
 global suffix_map, types_map, encodings_map, common_types
 global inited, _db
 inited = True # so that MimeTypes.__init__() doesn't call us again
 db = MimeTypes()
 if files is None:
  if _winreg:
   db.read_windows_registry()
  files = knownfiles
 for file in files:
  if os.path.isfile(file):
   db.read(file)
 encodings_map = db.encodings_map
 suffix_map = db.suffix_map
 types_map = db.types_map[True]
 common_types = db.types_map[False]
 # Make the DB a global variable now that it is fully initialized
 _db = db
 
 
def read_mime_types(file):
 try:
  f = open(file)
 except IOError:
  return None
 db = MimeTypes()
 db.readfp(f, True)
 return db.types_map[True]
 
 
def _default_mime_types():
 global suffix_map
 global encodings_map
 global types_map
 global common_types
 
 suffix_map = {
  '.tgz': '.tar.gz',
  '.taz': '.tar.gz',
  '.tz': '.tar.gz',
  '.tbz2': '.tar.bz2',
  '.txz': '.tar.xz',
  }
 
 encodings_map = {
  '.gz': 'gzip',
  '.Z': 'compress',
  '.bz2': 'bzip2',
  '.xz': 'xz',
  }
 
 # Before adding new types, make sure they are either registered with IANA,
 # at http://www.isi.edu/in-notes/iana/assignments/media-types
 # or extensions, i.e. using the x- prefix
 
 # If you add to these, please keep them sorted!
 types_map = {
  '.a'  : 'application/octet-stream',
  '.ai'  : 'application/postscript',
  '.aif' : 'audio/x-aiff',
  '.aifc' : 'audio/x-aiff',
  '.aiff' : 'audio/x-aiff',
  '.au'  : 'audio/basic',
  '.avi' : 'video/x-msvideo',
  '.bat' : 'text/plain',
  '.bcpio' : 'application/x-bcpio',
  '.bin' : 'application/octet-stream',
  '.bmp' : 'image/x-ms-bmp',
  '.c'  : 'text/plain',
   # Duplicates :(
   # (... the remaining types_map entries, the closing brace, the common_types
   # table and the module-level _default_mime_types() call are identical to
   # the stock Python 2.7 module and are not repeated in this listing ...)

if __name__ == '__main__':
 import getopt

 USAGE = """Usage: mimetypes.py [options] type
 
Options:
 --help / -h  -- print this message and exit
 --lenient / -l -- additionally search of some common, but non-standard
       types.
 --extension / -e -- guess extension instead of type
 
More than one type argument may be given.
"""
 
 def usage(code, msg=''):
  print USAGE
  if msg: print msg
  sys.exit(code)
 
 try:
  opts, args = getopt.getopt(sys.argv[1:], 'hle',
         ['help', 'lenient', 'extension'])
 except getopt.error, msg:
  usage(1, msg)
 
 strict = 1
 extension = 0
 for opt, arg in opts:
  if opt in ('-h', '--help'):
   usage(0)
  elif opt in ('-l', '--lenient'):
   strict = 0
  elif opt in ('-e', '--extension'):
   extension = 1
 for gtype in args:
  if extension:
   guess = guess_extension(gtype, strict)
   if not guess: print "I don't know anything about type", gtype
   else: print guess
  else:
   guess, encoding = guess_type(gtype, strict)
   if not guess: print "I don't know anything about type", gtype
   else: print 'type:', guess, 'encoding:', encoding
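
After saving the patched file, the module should initialize without the error. A quick check from a command prompt (assuming the C:\Python27 install referenced above; the exact output may vary, but no traceback should appear):

C:\>C:\Python27\python.exe -c "import mimetypes; mimetypes.init(); print mimetypes.guess_type('a.txt')"
('text/plain', None)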

Appended below is a post about Python encodings for reference.

1. Python's built-in libraries and functions all expect unicode strings.

2. str.decode converts a byte string to unicode, so any string that decodes successfully will be handled correctly when passed to Python's built-in libraries and functions.

3. The real question is which codec to pass to decode: utf-8, gbk, gb2312, or any of the many others. Pass the wrong one and you get exceptions like the two below (a minimal right-versus-wrong codec demonstration follows them):

UnicodeDecodeError: 'gbk' codec can't decode bytes in position 2-3: illegal multibyte sequence

UnicodeDecodeError: 'utf8' codec can't decode bytes in position 0-1: invalid data
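
As a minimal illustration of point 3, the byte string below is "中文" encoded as GBK (chosen purely for demonstration):

gbk_bytes = '\xd6\xd0\xce\xc4'        # "中文" encoded as GBK
print repr(gbk_bytes.decode('gbk'))   # right codec: u'\u4e2d\u6587'
gbk_bytes.decode('utf-8')             # wrong codec: raises UnicodeDecodeError
                                      # ("'utf8' codec can't decode byte 0xd6 ...")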

Here is a fuller example:

#coding:utf-8
# The line above declares this source file to be UTF-8 encoded.
import os

# The code below is illustrative; it was written off the cuff and has not been
# compiled or run. It assumes Windows (XP), because the default encoding on
# Linux (UTF-8) differs from the one on Windows (GBK).
# Assume D:\ contains a number of files with Chinese names.
filelist = os.listdir("d:\\")  # The Chinese names in this list come back as
                               # GBK-encoded bytes (check the cmd window
                               # properties to confirm the console codepage).
for name in filelist:
    path = os.path.join("d:\\", name)
    if os.path.isdir(path):
        continue
    # Using path.decode("UTF-8") here would raise an exception, because the
    # names returned by the Windows directory listing are GBK-encoded.
    fp = open(path.decode("GBK"), 'rb')
    print len(fp.read())
    fp.close()

filepath = r"d:\中文文件.doc"  # Assume this file exists; keep the Chinese name.
# Decode with utf-8 here, because this source file is declared as coding: utf-8,
# so the bytes of the literal above are UTF-8.
fp = open(filepath.decode('utf-8'), "rb")
print len(fp.read())
fp.close()

path2 = u"d:\\中文文件.doc"  # With the u prefix this is already a unicode
                             # string, so no decoding is needed.
fp = open(path2, 'rb')
print len(fp.read())
fp.close()
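
One way to avoid manual decoding altogether on Python 2 is to hand os.listdir a unicode path, which makes it return unicode filenames; a minimal sketch under the same assumed d:\ layout:

# -*- coding: utf-8 -*-
import os

root = u"d:\\"
for name in os.listdir(root):    # unicode argument -> unicode filenames
    full = os.path.join(root, name)
    if not os.path.isfile(full):
        continue
    fp = open(full, 'rb')        # unicode paths need no .decode(...) calls
    print len(fp.read())
    fp.close()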
