I want to log to a file on a network drive using Python's logging module. My problem is that the logging fails at some random point with this error:
--- Logging error ---
Traceback (most recent call last):
  File "c:\programme\anaconda3\lib\logging\__init__.py", line 1085, in emit
    self.flush()
  File "c:\programme\anaconda3\lib\logging\__init__.py", line 1065, in flush
    self.stream.flush()
OSError: [Errno 22] Invalid argument
Call stack:
  File "log_test.py", line 67, in <module>
    logger_root.error('FLUSH!!!'+str(i))
Message: 'Minute:120'
Arguments: ()
--- Logging error ---
Traceback (most recent call last):
  File "c:\programme\anaconda3\lib\logging\__init__.py", line 1085, in emit
    self.flush()
  File "c:\programme\anaconda3\lib\logging\__init__.py", line 1065, in flush
    self.stream.flush()
OSError: [Errno 22] Invalid argument
Call stack:
  File "log_test.py", line 67, in <module>
    logger_root.error('FLUSH!!!'+str(i))
Message: 'FLUSH!!!120'
Arguments: ()
I am on a virtual machine with Windows 10 (version 1909), using Python 3.8.3 and logging 0.5.1.2. The script runs in a virtual environment on a network drive, where the log files are stored. I am writing a script that automates some data quality control tasks, and I am not 100% sure where the script will end up (network drive, local drive, etc.), so it should be able to log in every possible situation. The error does not appear at the same position/line in the script but randomly. Sometimes the program (~120 minutes in total) finishes without the error appearing at all.
What I tried so far:
I believe that the log file is closed at some point, so that no new logging messages can be written to it. I wrote a simple script that does nothing but log, to check whether the problem is related to my original script or to the logging process itself. Since the "only-logs-script" also fails randomly when running on the network drive, but not when it runs on my local drive, I assume the problem is related to the connection to the network drive. I thought about keeping the whole log in memory and only writing it to the file at the end, but the MemoryHandler will also open the file at the beginning of the script and therefore fail at some point.
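As an aside on that last point: FileHandler accepts a delay=True argument that postpones opening the file until the first record is actually emitted, so a MemoryHandler in front of a delayed FileHandler would at least avoid holding the file open from the start. A minimal sketch of that combination, with a placeholder filename not taken from the post (note this only delays the first open; every flush still writes over the network, so it would not by itself prevent the OSError during the run):

import logging
import logging.handlers

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# delay=True: 'network.log' (placeholder name) is only opened when the
# first record is actually written, not when the handler is created
file_handler = logging.FileHandler('network.log', mode='w', delay=True)
file_handler.setLevel(logging.INFO)

# buffer records in memory; flush to the file handler on ERROR or when full
memory_handler = logging.handlers.MemoryHandler(
    capacity=1024 * 100,
    flushLevel=logging.ERROR,
    target=file_handler,
    flushOnClose=True,
)
logger.addHandler(memory_handler)

logger.info('buffered in memory, file not opened yet')
logger.error('this flushes the buffer and triggers the first write')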
Here is my code for the "only-logs-script" (log_test.py):
import logging
import logging.handlers
import os
import datetime
import time

##################################################################
# setting up a logger to create a log file with information about this program
logfile_dir = 'logfiles_test'

CHECK_FOLDER = os.path.isdir(logfile_dir)

# if folder doesn't exist, create it
if not CHECK_FOLDER:
    os.makedirs(logfile_dir)
    print("created folder : ", logfile_dir)

log_path = '.\\' + logfile_dir + '\\'
Current_Date = datetime.datetime.today().strftime('%Y-%m-%d_')
log_filename = log_path + Current_Date + 'logtest.log'
print(log_filename)

# Create a root logger
logger_root = logging.getLogger()

# Create handlers
f1_handler = logging.FileHandler(log_filename, mode='w+')
f2_handler = logging.StreamHandler()
f1_handler.setLevel(logging.INFO)
f2_handler.setLevel(logging.INFO)

# Create formatters and add them to the handlers
f1_format = logging.Formatter('%(asctime)s | %(name)s | %(levelname)s | %(message)s \n')
f2_format = logging.Formatter('%(asctime)s | %(name)s | %(levelname)s | %(message)s \n')
f1_handler.setFormatter(f1_format)
f2_handler.setFormatter(f2_format)

# create a memory handler
memoryhandler = logging.handlers.MemoryHandler(
    capacity=1024*100,
    flushLevel=logging.ERROR,
    target=f1_handler,
    flushOnClose=True
)

# Add handlers to the logger
logger_root.addHandler(memoryhandler)
logger_root.addHandler(f2_handler)
logger_root.setLevel(logging.INFO)

logger_root.info('Log-File initiated.')

# open a second test file to probe write access to the network drive
fname = log_path + 'test.log'
open(fname, mode='w+')

for i in range(60*4):
    print(i)
    logger_root.warning('Minute:' + str(i))
    print('Write access:', os.access(fname, os.W_OK))
    if i % 10 == 0:
        logger_root.error('FLUSH!!!' + str(i))
    time.sleep(60)
Is there something horribly wrong with my logging setup, or is it because of the network drive? Does anyone have an idea how to tackle this issue? Would storing the whole log in memory and writing it to a file at the end solve the problem, and how would I best achieve this? Another idea would be to log on the local drive and then automatically copy the file to the network drive when the script is done. Any help is greatly appreciated, as I have been trying to identify and solve this problem for several days now.
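For illustration, a minimal sketch of that last idea, logging to a local temporary file and copying it to the network share once at the end; the paths here are placeholders, not from the original script:

import logging
import os
import shutil
import tempfile

# log to a file on the local drive first (placeholder paths)
local_log = os.path.join(tempfile.gettempdir(), 'quality_control.log')
network_log = r'\\server\share\logfiles\quality_control.log'

logging.basicConfig(
    filename=local_log,
    level=logging.INFO,
    format='%(asctime)s | %(name)s | %(levelname)s | %(message)s',
)

try:
    logging.info('main script would run here')
finally:
    # flush and close all handlers, then copy the finished log file
    logging.shutdown()
    shutil.copy2(local_log, network_log)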
Thank you!
Answer
Since this is not really going anywhere at the moment, I will post what I did to "solve" my problem. It is not a satisfactory solution, as it fails when the code fails, but it is better than not logging at all. The solution is inspired by the answer to this question: log messages to an array/list with logging
So here is what I did:
import io

#####################################
# first create an in-memory file-like object to save the logs to
log_messages = io.StringIO()

# create a stream handler that saves the log messages to that object
s1_handler = logging.StreamHandler(log_messages)
s1_handler.setLevel(logging.INFO)

# create a file handler just in case
f1_handler = logging.FileHandler(log_filename, mode='w+')
f1_handler.setLevel(logging.INFO)

# set the format for the log messages (one formatter can be shared)
log_format = '%(asctime)s | %(name)s | %(levelname)s | %(message)s \n'
f1_format = logging.Formatter(log_format)
s1_handler.setFormatter(f1_format)
f1_handler.setFormatter(f1_format)

# add the handlers to the logger
logger_root.addHandler(s1_handler)
logger_root.addHandler(f1_handler)

#####################################
# here would be the main code
...

#####################################
# at the end of my code I added this to write the in-memory messages to the file
contents = log_messages.getvalue()

# opening the file in 'w' mode
file = open(log_filename, 'w')

# write the log messages to the file
file.write("{}\n".format(contents))

# closing the file and the in-memory object
file.close()
log_messages.close()
Obviously this fails when the code fails, but the code tries to catch most errors, so I hope it will work. I got rid of the MemoryHandler but kept a FileHandler, so that in case of a real failure at least some of the logs are recorded until the file handler fails. It is far from ideal, but it works for me at the moment. If you have other suggestions or improvements, I would be happy to hear them!
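One possible hardening of this approach, not part of the original answer: register the final dump with atexit, so the buffered messages are written out even when the main code dies with an unhandled exception (this assumes the log_filename and log_messages names from the snippet above):

import atexit

def dump_logs():
    # write whatever accumulated in the in-memory buffer to the file,
    # even if the main code raised an unhandled exception
    with open(log_filename, 'w') as f:
        f.write(log_messages.getvalue())

# atexit handlers run at normal interpreter shutdown, including after an
# unhandled exception, but not after a hard crash of the interpreter
atexit.register(dump_logs)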