Problem description
I'm struggling a bit with logging. I'd like to roll the logs over after a certain period of time and also after they reach a certain size.

Rollover after a period of time is provided by TimedRotatingFileHandler, and rollover after reaching a certain log size is provided by RotatingFileHandler.

But TimedRotatingFileHandler doesn't have a maxBytes attribute, and RotatingFileHandler cannot rotate after a certain period of time. I also tried adding both handlers to the logger, but the result was doubled logging (see the sketch below).

Am I missing something?
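For illustration, here is a minimal sketch of that two-handler attempt (the logger name, file name and rotation values are only illustrative): every record that reaches the logger is emitted by both handlers, so each line ends up in the file twice.

import logging
import logging.handlers

logger = logging.getLogger("sumid")   # illustrative logger name
logger.setLevel(logging.DEBUG)

# Size-based rotation only: rolls over at ~1 MB, has no notion of time.
size_handler = logging.handlers.RotatingFileHandler(
    "sumid.log", maxBytes=1048576, backupCount=20)

# Time-based rotation only: rolls over every 2 minutes, has no notion of size.
time_handler = logging.handlers.TimedRotatingFileHandler(
    "sumid.log", when='M', interval=2, backupCount=20)

logger.addHandler(size_handler)
logger.addHandler(time_handler)

# Each handler writes the record independently, so this line appears twice.
logger.info("some message")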
I also looked into the source code of logging.handlers. I tried to subclass TimedRotatingFileHandler and override the shouldRollover() method to create a class with the capabilities of both:
import logging.handlers
import time


class EnhancedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):

    def __init__(self, filename, when='h', interval=1, backupCount=0, encoding=None, delay=0, utc=0, maxBytes=0):
        """ This is just a combination of TimedRotatingFileHandler and RotatingFileHandler (adds maxBytes to TimedRotatingFileHandler) """
        # super(self). # It's an old-style class, so super doesn't work.
        logging.handlers.TimedRotatingFileHandler.__init__(self, filename, when='h', interval=1, backupCount=0, encoding=None, delay=0, utc=0)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have. We also compare times.
        """
        if self.stream is None:                 # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:                   # are we rolling over?
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)  # due to non-posix-compliant Windows feature
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        #print "No need to rollover: %d, %d" % (t, self.rolloverAt)
        return 0
But like this, the log creates one backup and then gets overwritten: the inherited doRollover() names the backup after the start of the current interval, so a second size-based rollover within the same interval reuses the same name and replaces the previous backup. It seems I also have to override the doRollover() method, which is not so easy.

Any other idea how to create a logger that rolls the file over after a certain time and also after a certain size is reached?
So I made a small hack to TimedRotatingFileHandler so that it can roll over on both time and size. I had to modify __init__, shouldRollover, doRollover and getFilesToDelete (see below). This is the result when I set when='M', interval=2, backupCount=20, maxBytes=1048576:
-rw-r--r-- 1 user group 185164 Jun 10 00:54 sumid.log
-rw-r--r-- 1 user group 1048462 Jun 10 00:48 sumid.log.2011-06-10_00-48.001
-rw-r--r-- 1 user group 1048464 Jun 10 00:48 sumid.log.2011-06-10_00-48.002
-rw-r--r-- 1 user group 1048533 Jun 10 00:49 sumid.log.2011-06-10_00-48.003
-rw-r--r-- 1 user group 1048544 Jun 10 00:50 sumid.log.2011-06-10_00-49.001
-rw-r--r-- 1 user group 574362 Jun 10 00:52 sumid.log.2011-06-10_00-50.001
You can see that the first four logs were rolled over after reaching a size of 1 MB, while the last rollover occurred after two minutes. So far I haven't tested the deletion of old log files, so it probably doesn't work. The code certainly will not work for backupCount >= 1000, since I append only three digits to the end of the file name.
This is the modified code:
import logging.handlers
import os
import time


class EnhancedRotatingFileHandler(logging.handlers.TimedRotatingFileHandler):

    def __init__(self, filename, when='h', interval=1, backupCount=0, encoding=None, delay=0, utc=0, maxBytes=0):
        """ This is just a combination of TimedRotatingFileHandler and RotatingFileHandler (adds maxBytes to TimedRotatingFileHandler) """
        logging.handlers.TimedRotatingFileHandler.__init__(self, filename, when, interval, backupCount, encoding, delay, utc)
        self.maxBytes = maxBytes

    def shouldRollover(self, record):
        """
        Determine if rollover should occur.

        Basically, see if the supplied record would cause the file to exceed
        the size limit we have. We also compare times.
        """
        if self.stream is None:                 # delay was set...
            self.stream = self._open()
        if self.maxBytes > 0:                   # are we rolling over?
            msg = "%s\n" % self.format(record)
            self.stream.seek(0, 2)  # due to non-posix-compliant Windows feature
            if self.stream.tell() + len(msg) >= self.maxBytes:
                return 1
        t = int(time.time())
        if t >= self.rolloverAt:
            return 1
        #print "No need to rollover: %d, %d" % (t, self.rolloverAt)
        return 0

    def doRollover(self):
        """
        Do a rollover; in this case, a date/time stamp is appended to the filename
        when the rollover happens. However, you want the file to be named for the
        start of the interval, not the current time. If there is a backup count,
        then we have to get a list of matching filenames, sort them and remove
        the one with the oldest suffix.
        """
        if self.stream:
            self.stream.close()
        # get the time that this sequence started at and make it a TimeTuple
        currentTime = int(time.time())
        dstNow = time.localtime(currentTime)[-1]
        t = self.rolloverAt - self.interval
        if self.utc:
            timeTuple = time.gmtime(t)
        else:
            timeTuple = time.localtime(t)
            dstThen = timeTuple[-1]
            if dstNow != dstThen:
                if dstNow:
                    addend = 3600
                else:
                    addend = -3600
                timeTuple = time.localtime(t + addend)
        dfn = self.baseFilename + "." + time.strftime(self.suffix, timeTuple)
        if self.backupCount > 0:
            cnt = 1
            dfn2 = "%s.%03d" % (dfn, cnt)   # three-digit counter, so backupCount must stay below 1000
            while os.path.exists(dfn2):
                cnt += 1
                dfn2 = "%s.%03d" % (dfn, cnt)
            os.rename(self.baseFilename, dfn2)
            for s in self.getFilesToDelete():
                os.remove(s)
        else:
            if os.path.exists(dfn):
                os.remove(dfn)
            os.rename(self.baseFilename, dfn)
        #print "%s -> %s" % (self.baseFilename, dfn)
        self.mode = 'w'
        self.stream = self._open()
        newRolloverAt = self.computeRollover(currentTime)
        while newRolloverAt <= currentTime:
            newRolloverAt = newRolloverAt + self.interval
        # If DST changes and midnight or weekly rollover, adjust for this.
        if (self.when == 'MIDNIGHT' or self.when.startswith('W')) and not self.utc:
            dstAtRollover = time.localtime(newRolloverAt)[-1]
            if dstNow != dstAtRollover:
                if not dstNow:  # DST kicks in before next rollover, so we need to deduct an hour
                    addend = -3600
                else:           # DST bows out before next rollover, so we need to add an hour
                    addend = 3600
                newRolloverAt += addend
        self.rolloverAt = newRolloverAt

    def getFilesToDelete(self):
        """
        Determine the files to delete when rolling over.

        More specific than the earlier method, which just used glob.glob().
        """
        dirName, baseName = os.path.split(self.baseFilename)
        fileNames = os.listdir(dirName)
        result = []
        prefix = baseName + "."
        plen = len(prefix)
        for fileName in fileNames:
            if fileName[:plen] == prefix:
                suffix = fileName[plen:-4]   # strip the trailing ".NNN" counter before matching the date pattern
                if self.extMatch.match(suffix):
                    result.append(os.path.join(dirName, fileName))
        result.sort()
        if len(result) < self.backupCount:
            result = []
        else:
            result = result[:len(result) - self.backupCount]
        return result
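For completeness, a minimal usage sketch of the class above (the logger name and format string are only illustrative; the rotation parameters are the ones used for the listing above):

import logging

# EnhancedRotatingFileHandler as defined above.
logger = logging.getLogger("sumid")    # illustrative logger name
logger.setLevel(logging.DEBUG)

handler = EnhancedRotatingFileHandler(
    "sumid.log",                       # base log file
    when='M', interval=2,              # time-based rollover every 2 minutes
    backupCount=20,                    # keep at most 20 rotated files
    maxBytes=1048576)                  # size-based rollover at ~1 MB
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

for i in range(100000):
    logger.debug("message number %d", i)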