Again, we can do the math and see how quickly the storage requirement grows. The amount of data that needs to be stored can increase rapidly, depending on the network bandwidth and the number of points in the network that are tapped. Extending the retention period beyond 30 days will likewise increase the amount of storage required.
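To make that growth concrete, here is a back-of-the-envelope calculation. The link speed, utilization, and retention figures below are illustrative assumptions for a single tap point, not numbers from any particular deployment:

```python
# Rough storage estimate for full packet capture at one tap point.
# All figures are illustrative assumptions, not vendor numbers.

def capture_storage_bytes(link_gbps: float, utilization: float, days: int) -> float:
    """Raw bytes written for one tap at the given average utilization."""
    bytes_per_second = link_gbps * 1e9 / 8 * utilization
    return bytes_per_second * 86400 * days

# One 10 Gbps tap at 40% average utilization, 30-day retention:
total = capture_storage_bytes(10, 0.4, 30)
print(f"{total / 1e12:.0f} TB")  # roughly 1296 TB of raw capture before compression
```

Doubling the retention period or the number of tapped links doubles this figure, which is why retention beyond 30 days gets expensive quickly.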
For many, this serves as a good starting point that allows them to scale out their full packet capture deployments over time. Some packet capture solutions also allow customers to capture subsets of their network data to further optimize what is collected and maximize the return on their packet capture investment. A successful packet capture system needs to capture at line rate, index and compress the data in real time, and write everything to disk continuously, all while managing the storage and retrieving data as needed for forensic investigations.
When a security incident is detected, you need to quickly determine what happened, how it happened and what, if anything, was compromised. Getting back to the question of whether packet capture is worth the investment, ask yourself this: How much would you be willing to pay for those answers? Imagine an IT team that just endured a ransomware attack and all or most of their critical business data was encrypted and locked.
Assuming they have a backup strategy, they will need to examine packet-based forensic data from before and during the ransomware attack to know how far back to restore from. Do they need to go back a day, three days, a week or more? Malware will often lie dormant on a network for a while to avoid detection, or spread slowly before it locks down critical systems.
According to Mandiant, dwell time for malware overall and ransomware specifically has been decreasing over the last few years, but the median global ransomware dwell time is still five days: plenty of time for an infection to spread unnoticed. IT will want to restore systems to a malware-free state, which means pinpointing when the compromise actually began. In other words, organizations need a backup of data-in-motion network traffic as well as a backup of data-at-rest business data to really cope with ransomware.
Access to packet data allows IT and Security Operations (SecOps) teams to conduct detailed incident response and forensic investigation to determine exactly what the attack was, how it breached the security perimeter, and how it spread. This sort of forensic analysis is only possible with stored packet data, which creates a record of events leading up to and during the breach. SecOps team members or external consultants can comb through the data to find the original malware that caused the attack, determine how it got onto the network in the first place, map how it traversed the network and determine which systems and data were exposed.
At certain points in the day, like during rush hour or after lunch when all the employees in a large company are going back to their desks, there are too many cars on the road. Things get even worse when a four-lane highway narrows into a two-lane road, and a lot of cars are looking to merge at the exact same time. Highway traffic is a fact of life and so is packet loss. When network traffic hits maximum capacity, packets will have to wait to be delivered.
Unfortunately, packets are the first things to get left behind when a network is trying to catch up with traffic and the connection can only handle so much. Luckily, most software today will circle back for those discarded packets by automatically resending the data or slowing down transfer speeds to give each packet a chance to make it through.
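This resend-and-slow-down behavior is what TCP gives applications automatically. As a rough illustration (a toy model, not real TCP), here is a sketch of a sender that keeps retrying each packet over a lossy link until it gets through or gives up:

```python
import random

def send_with_retries(payloads, loss_rate=0.3, max_retries=5, rng=None):
    """Toy model of reliable delivery over a lossy link: each packet is
    resent until it survives the link (or retries are exhausted),
    mimicking what TCP's retransmission does for you automatically."""
    rng = rng or random.Random()
    delivered, attempts = [], 0
    for p in payloads:
        for _ in range(max_retries + 1):
            attempts += 1
            if rng.random() >= loss_rate:  # packet survived the link
                delivered.append(p)
                break
    return delivered, attempts

delivered, attempts = send_with_retries(range(100), loss_rate=0.3,
                                        rng=random.Random(42))
print(f"delivered {len(delivered)}/100 packets in {attempts} sends")
```

Note the cost: at 30% loss, far more than 100 sends are needed to deliver 100 packets, which is exactly the throughput penalty users experience as a slow connection.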
Glitchy, old, or otherwise outdated hardware can significantly weaken your network. Firewalls, routers, and network switches all take up a considerable amount of power. Unchecked bugs in your system can also disrupt network performance and prevent it from sufficiently carrying packets. Sometimes rebooting your hardware will solve this, but since bugs are often introduced during hardware updates, the affected devices will need to be patched.
Overutilized devices are another common cause. Simply put, this means your system is running at a higher capacity than it was designed to handle. Packets on overutilized devices sometimes make it to their destinations, but by then the device is too overloaded to process the packets and send them back out. Many devices have buffers in place to put packets in holding patterns until they can be sent out. However, these buffers can fill up quickly, and excess packets are still dropped.
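That buffer behavior is often called tail drop. A minimal sketch, assuming a simple fixed-size FIFO device buffer:

```python
from collections import deque

class TailDropBuffer:
    """Fixed-size device buffer: packets queue until the buffer fills,
    after which new arrivals are simply dropped (tail drop)."""
    def __init__(self, capacity: int):
        self.queue = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        if len(self.queue) >= self.capacity:
            self.dropped += 1      # buffer full: the packet is lost
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

buf = TailDropBuffer(capacity=3)
for pkt in range(5):               # burst of 5 arrivals, room for only 3
    buf.enqueue(pkt)
print(buf.dropped)                 # 2
```

Whenever the arrival rate outpaces the drain rate for long enough, the drop counter climbs no matter how large the buffer is, which is why adding memory alone does not cure an overutilized device.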
We also cannot ignore the possibility of someone deliberately tampering with your network and causing packet loss. Packet drop attacks have become popular with cybercriminals in recent years. Essentially, a hacker gets into your router and tells it to drop packets. If you notice a sudden drop in packet success or a significant slowdown in network speed, you could be in the midst of an attack. Hackers can also execute a denial-of-service attack by flooding the network with more traffic than it can handle until it crashes.
The attackers then take advantage of this vulnerability. To defend against such attacks, invest in a SIEM solution, create a disaster recovery plan, update your firewall and, as always, keep your antivirus software up to date. That said, such attacks are rare; there are more common causes of packet issues.
Many IT administrators cobble together a network monitoring system out of different tools. Without a comprehensive, seamless network monitoring solution, opportunities to stop or prevent packet loss fall through the cracks.
Wondering how to reduce packet loss? Zero percent packet loss is unachievable, because the things causing it, like network issues, too many users, or an overloaded system, are bound to pop up. Any solutions recommended here or elsewhere are ways to help fix the problem after the fact, not prevent it from occurring. The key to preventing or lessening the impact of packet loss is network visibility.
A problem you can see is a problem you can solve. The tools listed below, in addition to boasting features specific to packet loss, can be used to give you a more comprehensive view of your network.
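Visibility starts with measuring loss. Monitoring tools typically derive a loss percentage from sent-versus-received counters (for example, ping statistics or SNMP interface counters); the helper below is a hypothetical illustration of that arithmetic, not any tool's actual API:

```python
def loss_percent(sent: int, received: int) -> float:
    """Packet loss as a percentage of packets sent, the figure most
    monitoring dashboards report per link or per interface."""
    if sent == 0:
        return 0.0          # no probes sent, nothing to measure
    return 100.0 * (sent - received) / sent

# 100 probes sent, 97 echoed back:
print(f"{loss_percent(100, 97):.1f}% loss")  # prints "3.0% loss"
```

Tracking this figure per link over time is what lets you spot a sudden spike, whether the cause is congestion, failing hardware, or an attack.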
Both tasks are accomplished with network monitoring best practices. SolarWinds Network Performance Monitor (NPM) is an excellent choice for admins who have to keep watch over a large systems environment. The hop-by-hop packet path maps are especially useful: you can quickly see whether the problem lies inside or outside the network, and the tool provides the information you need to start addressing the issue quickly.
NetPath highlights the problem links in red, making troubleshooting easy. It also displays each router and switch in the network route as a node; hovering over a node pulls up its latency and packet loss statistics. Along the same lines, the LUCID (logical, usable, customizable, interactive, drill-down) user interface in NPM gives you a complete summary of all network activity, device status, and alerts, so you can see how your system is doing without having to toggle between different screens.
Bonus: NPM is fully customizable. Being able to see everything is great, but at the same time, nobody wants to be bombarded with that much information all the time. The auto-discovery function in Network Performance Monitor also deserves a special mention.
After you set it up for the first time, discovery recurs automatically, so any changes made to the network will show up in the tool. It also compiles a list of all the network devices in your environment and creates a network map. Now you can combat packet loss before it even happens. QoS settings can also help by diverting bandwidth to the applications that need it most, but you still need a way to troubleshoot voice calls and gain visibility into their performance metrics.
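The idea behind those QoS settings can be sketched as a strict-priority queue: latency-sensitive traffic such as voice is always served before bulk data. This is a hypothetical illustration of the scheduling concept, not how any particular vendor implements QoS:

```python
import heapq

class PriorityScheduler:
    """Toy strict-priority QoS queue: lower priority number is served
    first, so voice (0) always leaves before bulk data (2)."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within a class

    def enqueue(self, priority: int, packet):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = PriorityScheduler()
q.enqueue(2, "backup-chunk")
q.enqueue(0, "voice-frame")
q.enqueue(1, "web-request")
print(q.dequeue())  # prints "voice-frame"
```

Strict priority keeps voice quality high during congestion, though in practice schedulers add safeguards (such as weighted fair queuing) so low-priority traffic is not starved entirely.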