Coming into this class, I already had two full internships and a couple of tours and shadows under my belt. This put me in an interesting spot. I would arrive in class and relate to the corporate stories and examples. From an executive not taking no for an answer, to not having enough time or money to do security properly, I found myself nodding along and thinking of examples from my own history. Somewhat luckily, I have only ever held temporary positions, always at the lowest rung. That meant I didn’t have to (or get to) make the big ethical decisions. And since the positions were temporary, I was okay only pushing back if I truly felt that something was going wrong; otherwise I generally did as I was told. There was one case in my high school internship where I did speak up, about password policies. We changed passwords once a month, and I got the sense that people were getting lazy with their passwords. I brought this up to the CEO and my manager. They heard my case, looked at one of the passwords a previous intern had given us for access to his project (WhiteHeadphones2021!, or something like that), and changed the policy. Originally I was more interested in development, but experiences like that pushed me toward security and its importance.
With those experiences, I picked up a couple of specific pieces of advice. The first one, which really helped me during my high school internship, was to say, “yes, but,” instead of no. We also talked about this in class, and the in-class stories reinforced the idea further. The CEO of the company I worked at was very eclectic, had lots of ideas, and didn’t have many people to carry them out. While I was there, I was told to automate webhooks into our ticket flow, configure and set up computers we sent to customers, manage the AD, research better registry configurations, automate computer deployment with Azure, audit Azure IAM and expenditure, build a crypto trading platform in two weeks for a client (I quashed that), and create an internal HR web application based on a template (just me, with two months left). This made me relate heavily to the developer case we discussed in class, where devs kept working on new features and never cleared the security backlog. If they had a CEO like the one I was under, always focused on the next new thing and the MVP, it would make sense. I’ve definitely learned that setting boundaries and not always being a yes-man is a good idea if you want to be happy in the long run.
One of the most useful things I try to keep in mind for ethical decision-making is to ask first and offer later. It’s very important to gather all the information you can before making an important decision. We talked about this in class with a scenario where you’re pushing for a security change but the C-suite is telling you to drop it. It could be that there is information above your position that prevents them from pushing out this security change. If you forced the issue or leaked the flaw, you could end up looking stupid or even hurting people. I would consider that an unethical decision, and a couple of things we talked about in class and in the readings back this up.
Between contractarian, virtue, Kantian, and consequentialist ethics, three of these seem to back up my current feelings. Start with the one that doesn’t agree: virtue ethics. In virtue ethics, a decision is ethical as long as it follows an ethical virtue. According to the readings, “A virtue is an excellent trait of character.” In this scenario, leaking the flaw could be considered ethical because it follows the virtues of truth, a drive to help and protect, and probably justice.
Of the ones that agree with my gut, contractarian and consequentialist ethics are the easiest to follow. From the reading, “For the contractarian, all moral norms are supposed to be the result of agreement by rational agents,” and here there probably was an agreement: most likely a quite literal NDA, plus something like a sub-contract to your manager to follow their direction. Leaking the security flaw would break both of these contracts and therefore be unethical. As for the consequentialist/utilitarian perspective, it is all about the choice, the outcome, and how moral those are. If there is a serious reason the company cannot fix the flaw, a leak creates a known way into the system with maybe nothing that can be done about it. That would cause a massive problem. Of course, it could also just put pressure on the executive team to fix the issue, but not without unnecessary harm. This leads me to believe a consequentialist/utilitarian wouldn’t find this choice ethical either.
Now onto one of the hardest theories for me to wrap my head around: Kantian ethics. The most well-known part of Kant’s philosophy is that we ought to, “act only in accordance with that maxim through which you can at the same time will that it become a universal law.” In this case, the maxim, as I see it, would be to leak any security vulnerability that you want to fix and aren’t allowed to. If that became a universal law, the impact on infrastructure costs is the first thing that comes to mind. Putting everything on hold to fix every security vulnerability immediately doesn’t lend itself well to business flow, and it would really upset development cycles. Aside from that, while it would be nice to fix every single vulnerability, does the really obscure exploit that can only enumerate computer names on the network really need to be fixed right now? This is why I think there is hardly any universal maxim that can be applied in the security world, and especially not this one.
Keeping this in mind, there is a topic we covered in class that I really haven’t had much previous experience with: risk. I did get a small brush with governance, risk, and compliance (GRC) at my previous internship at Collins Aerospace, but mostly with the governance and compliance parts of that trio. Bringing risk into the picture makes things grayer and more dependent on the specific security problems found, how easy each is to exploit, and how useful each is to an attacker. A risk assessment essentially looks at each vulnerability found and assigns it a risk value according to its likelihood of being exploited and its impact on company systems. A crucial aspect we talked about in class is the somewhat subjective nature of this seemingly concrete judgment. One of the readings we did covered the NIST Guide for Conducting Risk Assessments, but even there, plenty of wiggle room is left for interpretation and debate among security professionals. It’s just really hard to put numbers on a lot of these scenarios when they often involve personal biases about the usefulness of different assets or the attacker’s mindset.
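To make that likelihood-and-impact idea concrete, here is a minimal sketch of my own (not taken from the class or from the NIST guide itself) of how a qualitative risk matrix could be scored in code; the level names and the averaging rule are illustrative assumptions, and the output is exactly the kind of number people end up debating.

```python
# A rough sketch of the qualitative likelihood-x-impact scoring idea behind
# assessments like the NIST Guide for Conducting Risk Assessments.
# The scales and the combining rule here are my own illustrative assumptions,
# not official values from the guide.

LEVELS = ["very low", "low", "moderate", "high", "very high"]

def risk_level(likelihood: str, impact: str) -> str:
    """Combine a likelihood rating and an impact rating into one risk rating.

    Here the combined rating is just the average of the two level indices,
    which is one simple (and debatable) way to fill in a risk matrix.
    """
    score = (LEVELS.index(likelihood) + LEVELS.index(impact)) / 2
    return LEVELS[round(score)]

# An easy-to-exploit flaw on a low-value system:
print(risk_level("high", "low"))       # -> "moderate"
# A hard-to-reach flaw on a critical system:
print(risk_level("low", "very high"))  # -> "moderate", which is where the arguing starts
```

Even in this toy version, the subjectivity shows up immediately: whoever decides that an asset’s impact is “high” instead of “moderate,” or that the averaging rule is the right way to combine the two, is making exactly the kind of judgment call the class kept coming back to.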
Applying this back to our scenario has an interesting effect. If the issue you wanted to leak is very low risk, is it really worth leaking? At what point are you just being petty, especially when there are probably much bigger issues at hand? Looking at an extremely high-risk vulnerability, a lot of damage could be caused by leaking it. Leaking a high-risk vulnerability is also pretty counterproductive in the long run, because the damage caused by the leak is exactly the kind of thing you were trying to prevent by leaking it in the first place. This makes it very difficult to find a scenario where leaking a vulnerability won’t make too much of an impact, but also isn’t just petty.
Looking more at day-to-day ethical decision-making, it was interesting to see the adage we talked about in class, that something can be good, cheap, or fast (pick two), apply to almost every ethical discussion we had. Whether it was trying to find a video platform for school ASAP, having all your dev time focused on bugs and/or new features for a minimum viable product, or a department that doesn’t have enough money and is just trying to knock out the big problems, it really makes you think about what parts of “good” you’ll sacrifice for cheap and/or fast. There is also always the conundrum of things like zero days or undiscovered vulnerabilities in massive codebases (these may seem like the same thing in the eyes of something like the CVE catalogs, but I would argue that a zero day is closer to something outside of the ATT&CK and CAPEC frameworks we talked about in class). Even massive and extremely security-conscious organizations like the NSA can be hacked, which really just goes to show that there is always something left undone. While it feels amazing to imagine fixing every single bug and completely securing your network, that’s not exactly realistic. From this, I’ve really learned to live and let go while minimizing risk. Assuming compromise at every level and figuring out how to mitigate the impact is sometimes more important than patching the holes themselves.
So far I’ve focused a lot on applying security in different corporate environments, but a job in security isn’t just about patching holes and making a network more secure. Especially in a smaller organization, the security person may also be the general IT person. Another scenario we touched on in class was HR asking us to look at the computer activity of another employee. Especially if you have a relationship with that employee, it can be really tough. The best option would be to have a third party look at the suspicious employee’s activity, but that can be expensive, and if you are the general IT person, the company is probably small and may not have that kind of money. If you’re stuck in that situation, it’ll suck, but you may have to do it. I actually saw this happen at my first major internship: my manager had to look at the internet activity of an employee in the call center who was day trading between calls. So I know from experience how gross it feels.
I’ve also learned that legal can be your friend. If you’re not sure about doing something, checking with legal can ease your conscience, since you’re bound to follow their guidance anyway. Looking back at the case from class, if the request from HR feels a little out of place, it may be good to just check in with legal. If your company doesn’t have a legal department, first, you should probably get one; second, it’s probably good to raise the request anyway and double-check that you would even be allowed to collect the information they want.
This can be a very difficult thing to do at first. I often find pushing back to be quite difficult, so finding ways to redirect has definitely helped me. As I talked about before, saying, “yes, but,” can make it feel more collaborative. Just saying no is hard for both parties involved and really leaves nowhere to go. Offering another idea can help, but laying out the potential problems that lead the other person to your idea makes it far more impactful. Figuring something out yourself makes it stick in your head much more than being handed a problem and a solution. Another benefit of letting the other person arrive at your idea is that it’s less combative. If I rejected their opinion outright and substituted my own, they would most likely fight for their idea and attack mine. This is probably the most useful and applicable soft skill I’ve picked up for my time in security. There are so many competing ideas that are totally reasonable directions to take a company in, and this approach can help resolve that conflict.
Overall, I have found this class very useful in preparing me for the workforce. I got to learn a little about ethical theories, but the examples in class were definitely more helpful for applying my own beliefs. Aside from that, it was also useful to get some cyber history lessons so that it’s easier for me to follow conversations with other cyber people. Learning more about Enron, Assange, Snowden, and even the more recent Target breach will really help me stay on top of conversations in the industry. It was also really useful to look at and learn from their mistakes.
Another part of the class that will definitely follow me into the future is all of the different GRC and development talks. Thinking about keeping software ethical as well as secure, as in the Ethical OS reading, will most likely be something I look back at in the future. On the security side, thinking back to all of the development scenarios may help me get out of tough situations when the all-too-common shortage of resources, human and otherwise, pops up. On the GRC side of things, making sure to spell HIPAA right (remembering it’s not “hippo” helped) won’t get me laughed at, but knowing about HIPAA, PCI, and NIST will assuredly come into play later in my life. I had already worked a little with NIST while we were auditing our security practices in App Sec at one of my internships, but really digging into the relevant sections helped. I have also seen that the frameworks mentioned in class may be the most common, but they are by no means the only ones; I’ve run into other compliance regimes tied to an industry (missile control systems) and to a specific company (Wells Fargo). While this isn’t my favorite part of the industry by any means, I see the need and accept that it isn’t going away.
Thank you for teaching this class and preparing everyone for the day-to-day problems we’ll face in the future.