Over the past few days there have been a lot of improvements to the Linux Smart Enumeration script. Some of them are related to stability, but most address performance issues in certain scenarios.
Thanks to an issue opened last week (thanks @exploide), I started to investigate a serious performance problem that occurs when the user executing the script has thousands of writable files outside their home directory. In this case, LSE could take hours to finish due to how several checks were performed.
After some testing, I found good alternatives for writing the problematic tests and ended up shortening the time (in my environment) from hours to 8 minutes.
In addition, I implemented a new option that allows the user to exclude paths from the tests. So if you know that the machine you are testing has a ton of files under, let's say, /mnt/backups, you can now use -e /mnt/backups so tests will skip that path. You can also specify several paths in a comma-separated list: -e '/mnt/backups,/var/lib/mysql'.
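To illustrate how a comma-separated exclusion list like this can be handled in plain POSIX shell, here is a minimal, hypothetical sketch (this is not LSE's actual implementation; the list value, function name, and prefix-matching rule are assumptions for the example):

```shell
#!/bin/sh
# Hypothetical sketch: split a comma-separated exclusion list and skip
# any path that equals an excluded prefix or lives underneath it.
exclude='/mnt/backups,/var/lib/mysql'

is_excluded() {
  path=$1
  old_ifs=$IFS; IFS=,          # temporarily split words on commas
  for prefix in $exclude; do
    case "$path" in
      "$prefix"|"$prefix"/*) IFS=$old_ifs; return 0 ;;  # excluded
    esac
  done
  IFS=$old_ifs
  return 1                      # not excluded
}

is_excluded /mnt/backups/dump.sql && echo "skipped"
is_excluded /home/user/notes.txt || echo "kept"
```

The prefix match is done with a case pattern, so no external commands are spawned per path, which matters when iterating over thousands of files.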
After using this option in my environment to exclude a path containing thousands of user-writable files, the runtime dropped from the previous 8 minutes to just 2 minutes.
Of course, using the -e option will give less complete results, so use it with care.
Several tests use the files found by test fst000 (writable files outside the home directory) to look for interesting files. To do so, they iterate over all of these writable files.
The problem was that on each iteration the tests performed some basic checks, like confirming that the file exists with [ -f "$w_file" ] before doing anything else. This apparently innocent call works just fine on systems where only a few hundred writable files are found, but when there are several thousand it hits really hard, performance-wise.
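The slow pattern looks roughly like this (a sketch with hypothetical file names, not LSE's actual code): the existence check runs for every candidate, so each iteration pays for a stat() syscall even on files the test would discard anyway.

```shell
#!/bin/sh
# Sketch of the slow pattern: [ -f ] (a stat() syscall) runs on EVERY
# file before the cheap string test that would discard most of them.
dir=$(mktemp -d)
touch "$dir/app.conf" "$dir/app.log" "$dir/notes.txt"
writable_files="$dir/app.conf
$dir/app.log
$dir/notes.txt"

matches=""
for w_file in $writable_files; do
  [ -f "$w_file" ] || continue          # syscall first, for every file...
  case "$w_file" in                     # ...then the actual filter
    *.conf) matches="$matches$w_file" ;;
  esac
done
echo "$matches"
rm -rf "$dir"
```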
The solution was to work only with the path strings until we have a positive match for the specific test and only then check if the file exists. Now it seems pretty obvious :).
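Reordered that way, the same sketch becomes (again with hypothetical names): the pattern match on the path string is pure shell and costs no syscall, so [ -f ] only runs for the few paths that actually match.

```shell
#!/bin/sh
# Sketch of the fix: filter on the path string first (no syscall),
# then stat() only the surviving candidates.
dir=$(mktemp -d)
touch "$dir/app.conf" "$dir/app.log"
writable_files="$dir/app.conf
$dir/app.log
$dir/missing.conf"

matches=""
for w_file in $writable_files; do
  case "$w_file" in
    *.conf) ;;                          # cheap string match, no syscall
    *) continue ;;
  esac
  [ -f "$w_file" ] || continue          # stat() only real candidates
  matches="$matches$w_file"
done
echo "$matches"
rm -rf "$dir"
```

Note that the existence check still runs last, so paths that match the pattern but no longer exist (like missing.conf above) are still filtered out correctly.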