Memory management when building parallel tasks is not optimal
Python is quite memory-hungry for large scans
All scanning tasks run on Lambda; I can't mix in bare metal or containers
Job scheduling is helpful but very repetitive per target. Generic continuous scanning across all targets would be better, and it would also help spread out load with my new randomizing Rust scanners
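To illustrate what "generic continuous scanning" could mean in practice, here is a minimal sketch of a scheduler that spreads targets evenly over a scan interval and adds per-target jitter so runs don't cluster. The target list, interval, and function names are all hypothetical, not part of the current platform:

```python
import random

def next_run_times(targets, base_interval, jitter=0.25, now=0.0):
    """Spread targets evenly over base_interval (seconds) and add
    random jitter, so scans for different targets don't cluster."""
    slot = base_interval / max(len(targets), 1)
    schedule = {}
    for i, target in enumerate(targets):
        offset = i * slot
        # jitter each slot by up to +/- 25% of its width
        offset += random.uniform(-jitter, jitter) * slot
        schedule[target] = now + max(offset, 0.0)
    return schedule
```

A loop around this (pop the earliest entry, scan it, reschedule it one interval later) gives continuous coverage of every target without hand-writing a job per target.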
The database model has some limitations:
Nothing shows where a given result came from
It's not possible to construct the mermaid graph representation I came up with
Too many things are stacked inside the Domain object that are not entirely correct. For example: IP addresses should be their own entities; TCP/UDP ports relate to an IP, not a domain; and the order of resolved IP addresses in the Domain object is not stable, which makes it look like we have far more updates than we do.
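One way the split described above could look, sketched as dataclasses (the names and fields are hypothetical, not the actual schema): IPs become their own entities, ports hang off the IP, each record carries its provenance, and resolved IPs are an unordered set so resolution order can't masquerade as an update.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Port:
    number: int
    protocol: str  # "tcp" or "udp"

@dataclass
class IPAddress:
    address: str
    open_ports: set = field(default_factory=set)  # ports belong to the IP, not the domain
    discovered_by: str = ""  # provenance: which task produced this result

@dataclass
class Domain:
    name: str
    # unordered, so a re-resolve in a different order is not a "change"
    resolved_ips: set = field(default_factory=set)
```

Separate entities like these would also make a graph view (domain -> IP -> port edges) straightforward to generate.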
The task manager code is quite difficult to read, and I'm not confident it's robust enough
Introducing new tasks is quite labor-intensive
Tasks do not easily show the task template they belong to, which makes parsing log files more difficult
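Stamping the template name onto every log line would make the logs greppable per template. A sketch using the standard `logging` module's `LoggerAdapter` (the template names here are hypothetical):

```python
import logging

logger = logging.getLogger("tasks")
handler = logging.StreamHandler()
# include the task template in every record so log files can be
# filtered per template, e.g. grep '\[port_scan\]'
handler.setFormatter(logging.Formatter("%(levelname)s [%(template)s] %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_for_template(template: str) -> logging.LoggerAdapter:
    """Bind a (hypothetical) task template name to all log lines."""
    return logging.LoggerAdapter(logger, {"template": template})

log = log_for_template("port_scan")
log.info("task started for %s", "example.com")
```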
Statistics for all the scans that are running are not easily accessible or useful
THE MAIN THING: it hasn't once helped me find or get closer to any actual bug/bounty. I have learned a bunch of things, though, so that is something.