
I've been fighting elasticsearch to parse some netflow traffic. Found a great tool for logging the traffic and it even set up an elasticsearch instance and kibana for me, pretty much plug-and-play.

It logs lots of data, but I'm only interested in subnet level at this stage. I'm not interested in direction either.

A text file (or multiple files) in, say, a csv format would be ideal (other fields are fine, cut -f will strip those out):

datetime,collector,srcip,dstip,bytes

I could easily pipe into sed to strip the 4th octet, but in reality I'd probably parse with perl. It would take about 10 minutes and output a nice simple spreadsheet showing me which subnets are busiest, which times are busy etc, and apply accounting on a subnet basis (or, with a bit more perl, trivially assign different IPs to different accounting buckets). Something along the lines of the sketch below is all I mean.
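A rough, untested sketch of that perl: it assumes the csv column order above, rolls everything up to /24s by stripping the 4th octet, and the filenames are just placeholders.

    #!/usr/bin/perl
    # Rough sketch (untested): sum bytes per /24 src/dst subnet pair
    # from csv lines of the form datetime,collector,srcip,dstip,bytes
    use strict;
    use warnings;

    my %total;
    while (<>) {
        chomp;
        my (undef, undef, $src, $dst, $bytes) = split /,/;
        next unless defined $bytes && $bytes =~ /^\d+$/;   # skip header / malformed lines
        s/\.\d+$/.0/ for $src, $dst;                        # strip the 4th octet -> /24
        $total{"$src,$dst"} += $bytes;
    }
    print "srcnet,dstnet,bytes\n";
    print "$_,$total{$_}\n" for sort { $total{$b} <=> $total{$a} } keys %total;

Run it as something like "perl rollup.pl flows*.csv > subnets.csv" and the busiest subnet pairs come out on top, ready to drop into a spreadsheet.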

I can only assume elk is a completely different way of thinking. Colleagues think that grep is "hard", but click-click-click in kibana is easy.

However, I know I'm a grumpy old fart. I find lots of new ways to reinvent the wheel tiring and pointless, but it feels like shouting into the wind. Recent things that have made me simply sigh include ubuntu switching from /etc/network/interfaces to netplan, or from debian-installer to subiquity, or moving from init to systemd.

I'm sure there are good reasons for changing all of these, but for my use cases it just increases the workload.



Can you tell me which tool you're using for getting netflows? I am trying to build something that is really close to what you describe but fail to find the time to do so.


Using https://gitlab.com/thart/flowanalyzer at the moment

However, I'm thinking it would be far easier to write something from scratch.



