CSG Spring 2015 – Security 3.0: Card Access Security using Grouper

Charlie Kniefel – Duke

Legacy environment: Prior to 12/14, Duke used Blackboard Uptim to manage door access; buildings were independently managed, with no consistent rules for building access across campus; you would have to ask each building owner for access; deactivation of cards was not as quick as desired.

Implementation of Blackboard Transact: Started in early 2014 with planning starting in 2013. Covers both financial systems and access control; access control services are separated from financial transactions as part of implementation; lots of cleanup/preparation prior to transition.

Management of access plans: seems to be a lot like groups; based on who you are, your role, what classes you take.

Community plans: groups created for faculty, staff, students; needed to agree on standard business hours.

Individual building plans: access for students can be based on major or classes; challenges – financial system based on who pays you, not where you work; local coordinators trained to use tools to manage membership; goal is to get to role-based access.

Future steps: Wrapping Grouper seems to be a common trend: toolkits for instruction; research toolkits for managing research-group services; access control membership; share? Additional technology enablement: contactless, but with legs? Apple’s plans? Roles, roles, roles.

Built access for students to get the data about their card activity.

Lack of good space management data hinders usefulness of roles.

CSG Spring 2015 – Security 3.0: Learning to live with an advanced persistent threat.

We’re at Penn State University for the Spring CSG meeting. The first workshop is titled Security 3.0.

John Denune – UCSD – Learning to live with an advanced persistent threat.

What is an APT? Not an opportunistic attack – they’re after something you have. Targeted, skilled, and won’t stop until they reach their goals. It can take years for them to break into your systems. They can use technical means (0-days, custom malware) or social engineering. The motive can be theft of financial information, corporate espionage, or state-sponsored activity.

APT Lifecycle: External recon (looking at projects, org charts, etc), initial compromise, then establish a foothold, escalate privileges until they get what they want.

Initial detection started in June 2012.

Attackers tried to drop malware on a departmental machine – not all that unusual. They came in the same way the following night, on separate VPN accounts through the VPN concentrators, and logged in to servers with OU admin credentials. Over the next several nights the team reset passwords, rebuilt machines, etc.

Lesson learned: Really pay attention to anti-virus alerts, but don’t (completely) rely on your AV product – only one caught this and it only caught one out of several instances.

Where possible, track IPs instead of blocking them.

Initial recon was later traced back to February 2012 – the attackers scoured departmental web servers. The initial compromise happened in April. Found a dozen compromised machines they hadn’t known about.

Called in help: Make your local FBI agent your new best friend. They knew why hackers would be interested in the international studies department. Also were very helpful technically.

One piece of malware was a custom version of Gh0st RAT. Another technique was dynamic DNS beaconing – talking to different servers every hour or day, which makes it difficult to track IPs. Had to turn up logging as high as they could bear, especially authentication, netflow (on VPN concentrators), and DNS. Found another dozen systems that had been compromised.
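Beaconing like this can sometimes be surfaced from those DNS logs. A minimal sketch of the idea (hypothetical log format and thresholds, not UCSD's actual tooling): flag client/domain pairs whose query intervals are suspiciously regular.

```python
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(dns_log, min_queries=4, max_jitter=0.1):
    """Flag (client, domain) pairs whose query intervals are very regular.

    dns_log: iterable of (timestamp_seconds, client_ip, domain) tuples.
    max_jitter: allowed stdev/mean ratio of the inter-query intervals.
    """
    times = defaultdict(list)
    for ts, client, domain in dns_log:
        times[(client, domain)].append(ts)

    beacons = []
    for (client, domain), ts_list in times.items():
        ts_list.sort()
        if len(ts_list) < min_queries:
            continue
        gaps = [b - a for a, b in zip(ts_list, ts_list[1:])]
        m = mean(gaps)
        # A near-constant gap (low jitter) suggests automated check-ins.
        if m > 0 and pstdev(gaps) / m <= max_jitter:
            beacons.append((client, domain, round(m)))
    return beacons
```

Real malware adds jitter and blends in with normal lookups, so in practice this is one signal among many, not a detector on its own.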

All attacks took place Sunday – Thursday between 6pm and 3am Pacific: 9-5 Monday-Friday in Beijing.

You don’t need to rely on a lot of malware when you’ve already got a long list of credentials. You don’t need to crack passwords when you can just pass a hash. Can get the hashes from compromising a client, and if it’s an admin can then get access to servers and domain controllers.

Mitigations: change passwords multiple times per day; fast track 2FA; Compartmentalize passwords; separate user and admin credentials; minimize lateral trust – host based rules to prevent system-system access; scan entire domain for scheduled tasks; rebuild domain controllers.

Emergency Action – September 2013

Hadn’t forced password change in a dozen years. Effectively and securely communicating a password change is hard. Now doing it on a yearly cycle.

Reengagement – July 2013

Hackers kept trying to get access with stored credentials. After a week of failure they disappeared for a while, then came back trying old hashes for all upper-level management. All attempts have failed so far.

Infrastructure changes: yearly password changes; monitoring the network for pass-the-hash (not easy, because that’s the normal Microsoft way of getting access to file servers, so looking for hashes that don’t correspond to direct client logins); implementing 2-factor for OU admins; additional “bastion hosts”; limiting lateral access; more logging and Splunk analysis; security clearances for some personnel so they can talk to the FBI; Windows 8.1 and Server 2012 R2 features: RDP use without leaving credentials on the remote computer, and a new Protected Users group whose credentials cannot be used in remote PtH attacks.
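The "hashes that don't correspond to direct client logins" heuristic can be sketched as a log-correlation rule. This is a toy illustration with an invented event schema (field names are hypothetical, not real Windows event fields): flag NTLM network logons by accounts that had no recent interactive logon on the source host.

```python
def suspicious_network_logons(events, window=8 * 3600):
    """Rough pass-the-hash heuristic over a simplified logon-event stream.

    events: dicts with keys ts, user, host, logon_type ("interactive" |
    "network"), auth ("NTLM" | "Kerberos"), and src (source host, for
    network logons). Flags NTLM network logons where the account never
    logged on interactively at the source host within the window.
    """
    interactive = {}  # (user, host) -> time of last interactive logon
    flagged = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["logon_type"] == "interactive":
            interactive[(ev["user"], ev["host"])] = ev["ts"]
        elif ev["logon_type"] == "network" and ev["auth"] == "NTLM":
            last = interactive.get((ev["user"], ev["src"]))
            if last is None or ev["ts"] - last > window:
                flagged.append(ev)
    return flagged
```

Service accounts and legitimate remote administration generate the same pattern, which is why the notes call this monitoring "not easy" – the rule produces leads, not verdicts.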

[ICPL 2008] Self-Snooping – monitoring your networks

H. Morrow Long is an information security officer at Yale.

Have decided not to scan the network for sensitive data, but do scan computers looking for sensitive info.

Had two major data incidents.

Had a large federal contracts investigation, and one large data breach.

Now scan administrative desktops, and require all faculty and staff to scan data on their machines, including laptops. Using IdentityFinder on Windows, and some open source tools on MacOS and Linux. Have evaluated several enterprise products: Tablus, Vontu.

Spent the first half of 2006 doing data breach planning, which led them to realize that they had to have a data classification program. They have an agreement with the Yale Police to report every stolen laptop to them – and started to see more stolen laptops. In the beginning of 2007 they began a program of PGP whole disk encryption. In July of 2007 two laptops were stolen from the Dean’s Office – they had backups, which they scanned for sensitive data (Cornell Spider, Texas SENF program, Va Tech’s python program). They found 5,000 SSNs on each PC’s backup.

“The plan is fine until the shooting starts” – Patton.

Once you know what’s been lost, then you have to act on it. Criteria for scanning compromised computers – reasonable belief that data may have been exposed – evidence that somebody was on the computer for a length of time, or there’s evidence of data transfer, or if there’s belief that there may have been confidential data on the machine – don’t do scans for every time there’s a virus.

Yale completed an SSN elimination project in 2005 – so why were SSNs on those stolen machines? Course and student lists in email and spreadsheets, old and no longer needed. Discovered that almost everybody had at least one SSN on their machine – their own.

The thief stayed behind in an office and stole two laptops. Police caught him the next night, but didn’t recover the laptops. The computers were likely stolen for quick sale, not data. The laptops had BIOS and OS passwords, and one had a disk interlock password. But Connecticut law requires notification. Learned later that notification is really only required if there’s a name associated with the SSN.

Set up a call center for help, staffed by people in the Dean’s office. Crafted a communications plan, with several letters targeted at different people. Immediately encrypted all the laptops in the Dean’s Office with PGP Whole Disk Encryption.

One alum claimed ID theft and contacted the AG and the media. The AG wanted to know why Yale did not offer a credit protection plan. Hired ID Analytics to check the SSNs for probability of compromise.

They created tools for scanning (Windows only at first), and got the General Counsel to send out letters to specified staff letting them know that their machines were going to be scanned. Getting users to remediate data is the hard part – confusion, false positives, etc.

Policy for files with SSNs: 1. Remove 2. Move 3. De-identify 4. Encrypt
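Of those four options, de-identification lends itself to a quick illustration. A minimal sketch (my own example, assuming plain text and the standard NNN-NN-NNNN format): mask each SSN while keeping the last four digits, which are often all that’s needed for record matching.

```python
import re

# SSN-shaped strings: three digits, two digits, four digits, dash-separated.
SSN_RE = re.compile(r"\b(\d{3})-(\d{2})-(\d{4})\b")

def deidentify(text: str) -> str:
    """Mask SSN-shaped strings, keeping only the last four digits."""
    return SSN_RE.sub(lambda m: "XXX-XX-" + m.group(3), text)
```

Real remediation tooling also has to deal with spreadsheets, mail archives, and unformatted nine-digit runs, which is where the false positives mentioned above come from.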

They use their training management system to record whether people have completed and remediated from their scans.

David Escalante – Director of Security, Boston College

March 2005 – major data breach that required 100k + letters to alumni.

Realized that users don’t seem to mind people looking at their email for viruses and spam, so should be able to scan for PII. They also started collecting netflow data and Snort IDS. PII finder (Fidelis) “catches stupid people”, not hackers. They didn’t notify the community that they’re running these tools – if it’s legit to look for bad stuff coming in, they figure it’s legit to look for it going out. What happens to offenders? For PII, a VP or Dean is frequently involved.

When the White House invited the hockey team to visit, they wanted a list of all the visitors with their SSNs. The list was emailed – and they caught it going over the wire.

Encryption kills scanning on the wire.

Shirley Payne is the Director of IT Security and Policy at the University of Virginia.

Considerations for general policy decisions: Consistency with existing policies and norms (especially the physical world ones); compliance with or in consideration of laws.

UVa is sort of the opposite of BC: not generally monitoring content, blocking websites, or scanning devices without permission. There are, of course, some exceptions, like traffic monitoring for virus/worm signatures, etc.

[CSG Winter 08] Minimizing Use, Misuse, and Risk of Inadvertent Disclosure of SSN and Other Sensitive Data at Institutions of Higher Education

The last morning of CSG kicks off with a policy discussion on minimizing use of SSN and other sensitive data.

Steve Shuster, Cornell

Started data security policy work two years ago. Has had a long-standing data stewardship program on campus, aligned to Vice President offices. There were gaps – VPs don’t think about security as rules change. Policies and practices haven’t always been consistent. Started Data Incident Response Team (DIRT) – determines need to notify, how much analysis is enough, etc. VP of IT, Policy Office, Audit, Counsel, etc. Were taking about one incident per month to that group where sensitive data were involved.

Stepped back to think about data exposure – three categories: public, restricted, confidential. “restricted” is the default – allows the stewards to just worry about the extremities. Defined specific security requirements for the three classifications. IT security council – lead security person from each of the units, meets monthly. Established strong exception process – first thing you hear when talking about requirements is why people can’t conform. Have mechanism to update requirements continuously.

Policy has highlighted some things: missing some data stewards; pieces of data that cut across the data stewards, e.g. SSN. Looking at having a PII Officer that would be responsible for that kind of data. Finding the data is hard. Created a Cornell Spider application that can crawl a computer to look for confidential data. 50-60% of computers on campus have some confidential data on them.
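A tool like Spider is conceptually simple: walk the filesystem and pattern-match file contents. A toy sketch along those lines (not Cornell’s actual implementation – real scanners also handle Office formats, archives, and false-positive scoring):

```python
import os
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_tree(root):
    """Walk a directory tree; return {path: match_count} for files
    that contain SSN-shaped strings."""
    hits = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                # errors="ignore" lets us scan binary-ish files as text.
                with open(path, errors="ignore") as f:
                    matches = SSN_RE.findall(f.read())
            except OSError:
                continue  # unreadable file; skip it
            if matches:
                hits[path] = len(matches)
    return hits
```

The 50-60% hit rate reported above is plausible given how broadly a pattern like this matches – which is exactly why remediation, not detection, is the hard part.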

Randy Marchany, Virginia Tech

Their needs: stay out of the press; stay out of the courts; preserve data integrity; respect the privacy of students and employees.

privacyrights.org has a good chronology of data breaches.

Steps for managing sensitive data:

#1 – Do what you can do when you can do it.

Pre 2003 –

Building Blocks – a one page acceptable use policy; Data classification

Tools – SSL

Education – awareness seminars

Compliance – HR disciplinary action

#2 Create a framework for doing it – an IT Security Task Force – has lots of committees across the entire scope of the central IT division.

#4 (what happened to 3?) – Don’t think you’re done.

Built tools (including use of Cornell’s spider), encryption

Education – awareness sessions, faculty institute

Compliance – IT security reviews of departments, Audit

A complete solution is not needed to get something done.

Everyone has a role

Pulling all the pieces together is the challenge, and making sure it works

Cam Beasley, Texas

Formed compliance group with admin units in 2002.

Had a significant SSN incident in 2003, so got really serious.

2006 had another incident – turned out that they hadn’t involved very many academic representatives in their work.

Since that have implemented formal policies, how systems are to be managed and how apps are to be developed. The two major incidents were insecure apps.

Have developed data stewardship program.

By 2005-2006 they had shut off sensitive admin info flows. But still had problems out in the units. Developed a point-’n’-click sensitive number finder – built in Java, uses bit-mask pattern matching (faster than regexp). Applied it on clients and also to open shares over SMB or NFS. Also worked with Sourcefire (their IDS vendor) to build the algorithm in as a preprocessor (also works with Snort).
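The bit-mask idea can be sketched roughly like this (a toy illustration of the approach, not UT’s actual Java code): map each byte to a small character class once, then compare fixed class sequences instead of running a general regex engine.

```python
# Character classes: we only care about digits, dashes, and everything else.
DIGIT, DASH, OTHER = 1, 2, 0
CLASS = [OTHER] * 256
for c in b"0123456789":
    CLASS[c] = DIGIT
CLASS[ord("-")] = DASH

# The class sequence for an SSN-shaped string: NNN-NN-NNNN.
PATTERN = (DIGIT,) * 3 + (DASH,) + (DIGIT,) * 2 + (DASH,) + (DIGIT,) * 4

def count_ssn_shapes(data: bytes) -> int:
    """Count SSN-shaped substrings using class comparison, no regex."""
    classes = [CLASS[b] for b in data]
    n, hits, i = len(PATTERN), 0, 0
    while i + n <= len(classes):
        if tuple(classes[i:i + n]) == PATTERN:
            # Reject matches embedded in a longer digit run.
            before_ok = i == 0 or classes[i - 1] != DIGIT
            after_ok = i + n == len(classes) or classes[i + n] != DIGIT
            if before_ok and after_ok:
                hits += 1
                i += n
                continue
        i += 1
    return hits
```

The speed win in the real tool comes from doing the per-byte classification as cheap table lookups and bit tests, which also made the algorithm easy to drop into an IDS preprocessor as described above.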

Developed another tool (issora?) – a federated risk assessment tool. Applied the data classification scheme to it, and were able to classify data on almost 48k machines. Now have faculty members who speak the same language (e.g. know what Category 1 data means).

Klara Jelinkova, Duke

A lot of what they’re trying to do is about balance – divided problem: Duke Medicine security handles HIPAA data and policy; University handles FERPA and DMCA. That’s been very effective. As the two groups move closer together (joint ID mgmt, networking, etc), there’s more need for a higher-level policy group, which they’re exploring. As a technologist she’s been skeptical of policy and whether it works.

Longstanding policy – a unique ID should be substituted for the SSN. Tallman Trask (exec VP) sent a letter to all the deans – storage of SSNs requires his approval. Had a breach in a departmental web server – found out it had an app for brochure requests that asked for SSN to do later correlation. Who has the responsibility when policies aren’t followed? Is it the CIO?

Lots of discussion – one question that came up is what’s the sensitivity of passport numbers? Wasn’t in any of the policies.