How do you use Sentry?
Sentry SaaS (sentry.io)
Version
1.5.10
I've confirmed this happens with both Python 3.10.0 and 3.10.4.
Steps to Reproduce
- Raise an exception
- Catch the exception
- Capture it with `sentry_sdk.capture_exception()`
- Do nothing for a while (e.g. sleep)
To test, I used the following sample that prints to console when an exception is created or destroyed:
```python
import os
import time

import sentry_sdk


class SomeException(Exception):
    count = 1

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.instance = SomeException.count
        SomeException.count += 1
        print(f'SomeException created: {self.instance}.')

    def __del__(self):
        print(f'SomeException destroyed: {self.instance}')


def test():
    try:
        raise SomeException
    except Exception as e:
        sentry_sdk.capture_exception(e)


sentry_sdk.init(
    dsn=os.environ['SENTRY_DSN'],
    environment='development',
)

for i in range(10):
    test()
    # sentry_sdk.flush()

print('Waiting...')
time.sleep(10)
print('Done.')
```
Expected Result
The exception objects should be released and destroyed as soon as Sentry flushes the corresponding events.
Actual Result
Running the sample as-is produces the following output:
```
SomeException created: 1.
SomeException created: 2.
SomeException created: 3.
SomeException destroyed: 2
SomeException created: 4.
SomeException created: 5.
SomeException created: 6.
SomeException created: 7.
SomeException created: 8.
SomeException created: 9.
SomeException created: 10.
Waiting...
Done.
SomeException destroyed: 1
SomeException destroyed: 4
SomeException destroyed: 5
SomeException destroyed: 6
SomeException destroyed: 7
SomeException destroyed: 8
SomeException destroyed: 9
SomeException destroyed: 3
```
If I uncomment the manual call to `flush()`, exceptions #4–9 are destroyed before the `sleep()` call instead of after. Exceptions #1 and #3 are still not destroyed until the program exits.

In no case does exception #10 (the most recent one raised) print its `__del__` message. I'm not sure why that is.
I understand that Sentry maintains a queue internally, so I expect these objects to live until the queue is cleared out. What I find surprising is that (a) the first exception (and apparently the third as well?) is not dropped until the program exits, and (b) outstanding events in the queue are not automatically flushed after some period of time. I would expect a periodic flush (say, every 5–20 seconds) to catch situations where a single error is raised once and not repeated. However, I've tried sleeping for up to 5 minutes and the queue never seems to flush.
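For what it's worth, the periodic flush I'd expect can be approximated today with a background thread. This is just a minimal sketch of the idea, not SDK code; the stub callable below stands in for `sentry_sdk.flush`, which is what I'd actually pass:

```python
import threading
import time


def start_periodic_flush(flush_fn, interval=10.0):
    """Call flush_fn every `interval` seconds from a daemon thread."""
    stop = threading.Event()

    def loop():
        # Event.wait returns False on timeout, True once stop is set.
        while not stop.wait(interval):
            flush_fn()

    threading.Thread(target=loop, daemon=True).start()
    return stop  # call stop.set() to end the loop


# Demo with a stub callable; in real code I'd pass sentry_sdk.flush.
calls = []
stop = start_periodic_flush(lambda: calls.append(time.monotonic()),
                            interval=0.05)
time.sleep(0.3)
stop.set()
```

It would be much nicer for the SDK's own worker to do this, of course, since a user-level thread can't see whether the internal queue is actually non-empty.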
This behavior (holding onto raised exceptions) is a problem for us because some of the locals referenced by the traceback have resources that need to be released.
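As a workaround on our side (not a fix for the SDK behavior), I believe the stdlib's `traceback.clear_frames()` can release the frame locals pinned by a held exception, even while the exception object itself stays alive. A minimal sketch, using a hypothetical `Resource` class and a weakref to observe when the local is freed:

```python
import gc
import traceback
import weakref


class Resource:
    """Hypothetical stand-in for an object holding releasable resources."""


refs = []  # weak references used to observe when locals are freed


def failing():
    res = Resource()  # local variable pinned by the traceback
    refs.append(weakref.ref(res))
    raise ValueError('boom')


kept = None
try:
    failing()
except ValueError as e:
    kept = e  # simulate the SDK holding on to the exception

gc.collect()
assert refs[0]() is not None  # `res` is still alive: the traceback pins it

# Clear the locals of every finished frame in the traceback.
traceback.clear_frames(kept.__traceback__)
gc.collect()
assert refs[0]() is None  # `res` was released, even though `kept` lives on
```

Separately, if capturing stack-local variables isn't needed, I believe `sentry_sdk.init(with_locals=False)` reduces what the SDK serializes, though as far as I can tell it wouldn't change the SDK holding the exception object itself.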