I've had to deal with all kinds of very large log files lately. Right now I'm working with a large mainlog from Exim. I know there is a tool called exigrep to search it, but my question is broader than using exigrep on Exim main logs. Ideally the answer would help with all kinds of large log files and describe a workflow.
What I want, in general, is to strip a log file down into a smaller one that only shows the data I'm looking for at that moment.
For example, in an Exim mainlog you can see which mails were sent and whether they were accepted for delivery or not. Some of that information is spread across multiple lines.
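From what I understand, exigrep already handles that multi-line problem for Exim: it prints every log line belonging to a message whose entries match the pattern. Something like this seems to be the basic usage (the address is made up, and the log path is a guess since it differs per distribution):

```
# Print the complete log history of every message whose entries
# match the pattern (an address, message ID, host, ...):
exigrep 'user@example.com' /var/log/exim/mainlog

# Write the result to a smaller file for further digging:
exigrep 'user@example.com' /var/log/exim/mainlog > interesting.log
```

But I'd like a workflow that isn't tied to Exim.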
So, in general, this is what I want to do:
- Open a large log file and search for certain patterns.
- Extract the matching lines, ideally into a new file, sometimes with a number of lines above or below each match.
- Refine the file from step 2 further into an even smaller file, as long as the filtered information stays clear to understand (see the sketch after this list).
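From reading the grep man page, I think steps 1 to 3 could look something like this (patterns and file names are made up; `-B` and `-A` print lines of context before and after each match, and `-C` does both):

```
# Steps 1+2: search for a pattern, keep 2 lines of context above
# and below each match, and write the result to a new file:
grep -B 2 -A 2 'refused' mainlog > pass1.log

# Step 3: refine the smaller file again; -v drops matching lines,
# so noise can be filtered out pass by pass:
grep -v 'dovecot_login' pass1.log > pass2.log
```

Each pass produces a smaller file, so in theory I can keep narrowing until only the relevant lines are left. Is that the right idea?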
What I tried, for example, is opening my log file in PhpStorm and selecting some repetitive piece of text, then using "Select all occurrences" or "Select next occurrence". That adds extra cursors to the file that I can use to select multiple lines, BUT it is limited to 1000 cursors.
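From what I've gathered, grep doesn't have such a limit, and its -o option does roughly what the multi-cursor selection does (the pattern here is just an example):

```
# Print only the matched text, one occurrence per line;
# roughly what multi-cursor selection gives me, without the 1000 limit:
grep -o 'id=[A-Za-z0-9]*' mainlog

# Or just count how often a pattern occurs:
grep -c 'Completed' mainlog
```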
Could you give me some tips on how to do this kind of thing? It's okay if I need to fall back to old-school Linux tools like grep, sed and awk. The only one I'm fairly comfortable with is grep; the other two I don't really understand yet.
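For what it's worth, these are the only sed and awk one-liners I've managed to piece together so far (patterns are made up, and I'm assuming the first two fields of a mainlog line are the date and time):

```
# sed: -n suppresses default output, /pattern/p prints matching lines
# (same effect as: grep 'refused' mainlog):
sed -n '/refused/p' mainlog

# awk: match a pattern and print selected fields; here $1 and $2
# would be the date and time columns:
awk '/refused/ { print $1, $2 }' mainlog
```

An explanation of how to combine tools like these into the workflow above would be really appreciated.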