[CSG Winter 2011] Unified Communications Workshop – part 1.

Mike Pickett (Brown)

What is UC?

Multiple devices, platforms, time-span, products
Will affect workflow, ability to integrate with lots of devices
“UC is integration of real-time and non-real-time devices across platforms”
Brown engaged WTC Consulting – Phil Beitleman

Why care? UC enables business process integration – simplifying and integrating all forms of communications to optimize business processes, reduce response times, and manage flows.

Survey Results – 2 campuses are on the way to eliminating desktop phones.
Illinois – have about 18 months to go.

Bill Clebsch – at Stanford they’re finding that people think they want the soft phone, but after two or three days of using it they find they don’t.

Iowa – deployed OCS (Microsoft Office Communications Server) for presence and IM across campus, and people like it. 150 people on OCS voice, paired with unified messaging. Unified messaging has been the killer app.

Greg J – 4 dimensions to communications we need to unify – voice, text (becoming one), documents, video. Many-to-many video is a big unsolved problem. We’re not going to control any of these, so understanding how to move forward with them in ways that allow people to collaborate is important.

Shel asks “can we embrace mediocrity at the institutional level, because the innovation is going to happen around us?”

Tom Barton – thinking about the global use as we extend our campuses is important.

Klara – How far do we go in supporting mobility in the hospitals?

Jim Phelps – thinking about how we migrate the store of rich streams as systems transition is important.

Ken Klingenstein – there’s a level of indirection we can provide in this space, and that is our business.

Two Expert Views:

Vern Elliot – Gartner
– cellular providers don’t take direction from universities, they take it from 16 year olds
– it’s all about the network
– Big driver – things are moving to commodity hardware and TCP/IP
– H.323 is becoming dominant
– communications are becoming integrated with apps
– consumerization
– on demand, cloud-based
– desk phone will have a diminishing role for at least 10 years.
– don’t get tied into a single vendor – not a good time to make a big bet if you can avoid it
– need a vision / strategy to resolve organizational issues over 3-5 years.
– cell phones are leading the convergence
– Google doesn’t have an enterprise approach yet
– MS Lync option is getting pretty impressive

WTC – Phillip Beitleman
– Reinvest in wire as you adopt a wireless strategy
– Harden the entire network – most eggs will be in this basket
– carrier neutral distributed antenna systems
– figure out actual costs across all IT services so funding can be mapped
– put together formal, structured plans across technology map and across multiple years – identify future funding strategies
– take longer planning cycles – 10 years for infrastructure
– don’t throw things away
– UC doesn’t usually end up saving money in the near term because of complexity.
– rate models need to evolve to include telephony, network, and IT services
– WiMAX has lost the battle – LTE will win

Directories are important.

Charlie – we only need phone numbers because of the legacy systems. If we all had SIP systems we’d use our network IDs (e.g. a SIP address like sip:netid@school.edu instead of a ten-digit number).

Klara – voice is an immediate mode of communication (just one step down from video), and there will always be a role for it. Different population segments communicate differently, and we will have to support all of them.

Elazar – let’s move the risk of technology changes from us to the carriers.

Shel – if we endorse a solution, then we need to be the advocate for our users with that service.

Andy – people want a number as an enterprise identity. The carriers have ways to have multiple numbers on a single device – UMich is doing this in a pilot, where they put a UMich number on people’s individual cell phones.

Bill – Want some people to reach you by your institutional identity. We have three separate identities now – a network ID, an email address, and a phone number. Can we go to one? Security of research information is very important – how do we protect that? Only we can answer those questions.

Tracy – some of the reasons people don’t want to give up their devices aren’t yet supported in the new models. Where will people forgo convenience for mobility, and where not? When we think about remote locations, we need higher fidelity and bandwidth – will we find mobile ways to deliver that?

Ken – metadata is (as always) important – where’s the metadata that says what was in that videoconference? Where’s integrated search?

Shel – we’re in a purgatory period – most voice mail just says “hi it’s me – call me.”


[CSG Winter 2011] InCommon Silver

InCommon Silver is an Identity Assurance Program. It imposes infrastructure requirements across eight assessment areas, in three general categories:
1. Documentation of policies, procedures, and standard operating practices
2. Strength of authentication and authorization
3. ?

CIC CIOs provide strong exec. sponsorship.
The CIC universities will implement Silver to support LoA 2 by Fall 2011

CIC co-leads – Renee Shuey (Penn State), Tom Barton (Chicago).

Michigan State – goals were to enable collaboration, so they needed to build trust with external partners to facilitate access to services. Initial challenges revolved around interpreting the Bronze/Silver Identity Assurance Profile (IAP) – luckily friends in the CIC helped decode it – it’s got very complex ideas. Password policies didn’t map – they were too simple. Sorely lacking: documentation, policy. Who to provide this for? Try to pare down scope. What’s the killer app? It has yet to rear its head – most likely to come out of NIH. The argument has been: let’s be proactive and prepared before it becomes a requirement.

Approach – work with other institutions, partner with campus stakeholders, identify a subset of users (likely research faculty), leverage ID office (verification process, credentialing). Investigating second credential (certs) through iClass ID Cards – might do that rather than strengthen passwords on first credential.

Mary Dunker – VA Tech

Rewind to CSG, Jan 2010
– Developing levels of assurance (LoA) for personal digital IDs at Tech.
– Developing method for determining LoA.
– Developing tech for authenticating at LoA.
– Aware that InCommon Silver was “out there”, but was going down road towards NIST certification.

Now
– Established standard for personal digital identity levels of assurance
– CAS recognizes LoA of authentication credential
– CAS front-ends Shibboleth
– ex officio member of CIC Silver Project planning group.

Where they’re going
– achieve InCommon Silver with personal digital certs on a USB token. Later possibilities – VASCO Digipass one-time password devices; soft certs (require infrastructure changes, development of a new UI).

Remaining tasks – wait for Silver to be finalized; ensure compliance with Silver – may require a change to record (and encrypt) DL or passport numbers; ensure that CAS checks the revocation list for certs; request audit; apply for Silver.

Iowa (Chris Pruess)

Silver thinking – the project doesn’t stand in isolation. Identity service served the central academic space, but not the hospital. Brought the hospital into that space starting in 2000. Current authentication focus – Active Directory assessment: can it provide the required level of authentication strength to meet Silver? Have strong project management discipline in the IT org. Leveraging other projects – campus ID card (ID proofing improvements – brought the hospital badging requirement in also), revision of enterprise password policy (established framework for multiple strength passwords).

Tom notes that while the initial use cases for Silver are for smaller specialized populations (NIH apps, TeraGrid) we should be ready for the larger cases coming – e.g. TIAA/CREF, financial aid, etc. Chicago wants to get to Silver using existing user name/password credentials. Requires a bunch of work on things like how passwords are stored and managed.

RL Bob Morgan – Refining Silver.
We were working on the federal E-Auth requirements, but then they phased that out and started ICAM.

Need to change based on feedback – if it’s that hard for VA Tech, that’s a problem. It has to work for everyone. Needs to be as simple to understand and implement as it can be while still dealing with federal requirements. People read every word. Watch out for “must”. Remove most requirements not referenced by the ICAM TFPAP. Exception is some other potential Silver consumers such as TeraGrid/IGTF.

Business, Policy, and Operational Factors is the primary section where elements have been removed. Audits and Auditors – recognize the need for shared risk between InCommon and campuses; propose an Assurance Review Board; role of auditors: confirm management assertions, not guarantee IA conformance. Reduce the number and frequency of audits. Tom notes that they’re working with ACUA (the association of college and university auditors) toward guidelines on how to audit identity management. Matt notes that working with your auditor before setting down this path is a very good idea.

IAM functional model – flesh out the enterprise scenario vs. a dedicated IdP – multiple apps, RAs, password stores. Streamline terms. Define terms in context.

Registration and proofing – clarify some concepts – existing relationship, identity information (e.g. meaning of “address of record”).

Kevin Morooney – It’s important – You should care. Two perspectives

Campus CIO –

4 basic principles/observations
– We want more. Always.
– They said it couldn’t be done, but we did it
– If your best friend jumped off a bridge….
– We are playing our part in an epic battle.

The importance of trust increases with transactional importance – from affinity cards through credit cards, driver’s licenses, passports, Social Security cards, and birth certificates.

Principle: Over time we want to do higher-stakes transactions online. True within campus, off campus, between campuses, etc. Klara’s point – we’ve been doing it all along for quite some time. The value of doing Silver is already paying off.

Principle: eduPerson, authentication, authorization. Each of these was a hard effort, but we’ve made a lot of progress. Every step along the way there were naysayers – they weren’t right. But they could have been. NIH is taking this trust fabric idea very seriously.

Principle: Others with whom we do business are heading in the same directions, for incredibly similar reasons.

An epic battle is being waged – Popularity vs. Truth. Our institutions are largely in the business of getting it right – what we’re constantly up against is popular knowledge that hasn’t been vetted. Getting trust right is a part of truth. Changing scholarship models will require making strong assertions about our people.

A late addition – big companies have contacted Kevin about learning how we’ve done identity management – because we’ve been dealing with the chaos that they’re just beginning to experience.

InCommon guy –

Principle – it’s about community. InCommon maturation – size and shape of the org are changing. Lot of dialog about wanting InCommon to play more of a role – community asking it to do things.

Principle – Silver is one of many things that supports the theme of the future – ever-increasing trust.

InCommon’s success is dependent on what we do on our campuses.

[CSG Winter 2011] IT Alignment, efficiency, strategy and governance, part 1

Jim Phelps is setting the stage – what does it mean to be a mature enterprise? 5 stages – Ad Hoc, Basic, Standardized, Managed, Adaptive – lower levels driven by technologies, upper levels driven by business strategies. Adaptive is designed to pursue change and adapt quickly. Higher ed governance structures are designed to resist change – to keep processes going through turmoil.

Why change? Not just because it’s fashionable – there are a lot of compelling drivers, like the cost differential between cloud and on-premises services. Huge shift in how business is being conducted globally – see the article in The Atlantic on The Rise of the New Global Elite, and the article in the Chronicle on European university mergers. Higher ed has a terrible time making decisions.

Bernie takes over, asking why alignment is so important. Typically only about 1/3 of total institutional IT spending is in the central IT organization. As we move into challenging days, the reaction may be to lower central administrative spending, but that may not help with IT. We need to help our institutions understand how IT works in the institution and how to rationalize it.

Role of Governance – to understand leadership role in facilitation of conversations. Understand what customers are looking for, and to help lead and socialize directions. Strategically choosing governance groups is critical. Choose who will be part of which groups and what the roles are in continuous technology conversations. Often groups like to think they get to tell the central IT shop what to do, but we need to help them understand their role within the governance continuum. Looking for a shared set of strategies to move forward.

IT Strategic Planning Goal – “Identify and invest in technology projects that are transformative and provide competitive advantage…” Terry asks who is the competition? Competing with other institutions, as measured by rankings, research dollars – but what happens when everyone’s strategy is to be in the top 3? Tracy notes that the differences are in how we translate the goals into our culture and practices. Some of us are focusing on international programs, some on bridges with K-20, etc. Mike Pickett says that while the rhetoric may be the same about seeking competitive advantage, we want to make sure that IT is not perceived as a competitive disadvantage.

UMN has rolling 6 and 2 year plans, and then works on quarterly work plans, where they try to focus on planned vs. unplanned activities. Trying to manage an IT investment portfolio and bring everybody into the conversation. Project selection criteria include the kind of project, what value it brings to the institution, and how it’s financed. Focused on strategic and operational priorities.

What criteria do you need to have in place to make a decision? Definition, functional ownership, business case, and finance plan. Some projects are in a planning and development phase where these things are not yet clearly understood. How do those get decided and resourced? Iowa says those that have strong champions get resourced. At Brown they have a committee chaired by the CFO – everything that’s over $50k or is a new service is supposed to come through that group.

Looking at Risk – Org and Tech readiness, architecture fit, definition is well understood, infrastructure compatibility. Looking for Value on Investment – look at over 5 year term. Looking to figure out how to shrink effort on non-strategic work and increase resources available for strategic initiatives. It’s an art form with a political calculus.

Joel says this is less about the org chart and more about the real relationships with people so everyone really understands their role. John from Duke notes that lemmings are perfectly aligned – sometimes you want to see a diversity of approaches, like with learning management systems where all the current answers are crummy. Sometimes you need to embrace chaos. Terry agrees that we need to consider alignment and efficiency vs. effectiveness. We don’t have that many arrows in our quiver to gain efficiencies – automation, de-duplication of services, and standardization. How do we try to live in a mode of pushing efficiencies while meeting the ever more disparate needs of our audience? Tracy says that part of the CIO’s role is to balance the gaining of efficiencies with the fact that two years from now people may have money again and will be driving towards flexibility.

Elazar – created new governance structure at UCSF – IT Steering Committee chaired by a faculty member – 5 groups under it. Everything that is substantial in university (including medical center) goes through this group.

Goodbye UW, Hello Chicago!

Last Tuesday was my last day as an employee of the University of Washington.

I’m excited to say tomorrow I start in a new job as Senior Director for Emerging Technology and Communication with IT Services at the University of Chicago. I’ll be part of the leadership team that Klara Jelinkova, their relatively new Chief Information Technology Officer, has put together. I’ve known and admired Klara as a colleague for a number of years now as she’s held increasingly responsible positions at the University of Wisconsin and Duke before coming to Chicago in March. Klara is one of the new generation of higher-ed CIOs – whip smart, completely grounded in the technologies, but understanding the role that modern IT organizations must play to work with and serve the university. I couldn’t imagine a better person to work for. The other folks I already know in the Chicago organization (Tom Barton, Greg Anderson, Bob Bartlett) are also top notch, and I look forward to working with a whole new group of colleagues.

While I’m sad to be getting ready to leave Seattle, I look forward to getting to know Chicago, a great and vibrant city. It’s gonna be hell on my downhill skiing, though.

I’ll be blogging about my experiences in getting to know Chicago and our work in IT Services as it happens, but I wanted to at least take a brief look back on my 16.5 years at the UW, and all that we’ve accomplished over those years, because over the course of that time we did play a part in changing the world.

It’s easy to forget that in the 1990s computer professionals at academic institutions were busy inventing the future. When I first came to the UW in 1994 it was not generally accepted in industry that internet protocol networking was going to be the way to go, nor that open protocol applications for email and other purposes would be adopted on a wide scale.

In 1994 we were excited about new emerging Internet applications and standards such as Gopher (invented at the University of Minnesota by my colleague Mark McCahill), IMAP (pioneered at Stanford and the UW by Mark Crispin and colleagues) and Z39.50. The World Wide Web had been recently invented at CERN, the European particle physics research lab, and the Mosaic web browser, created at the University of Illinois’ supercomputing center, was wowing us with its ability to integrate images, text, and hypertext links in an open way that made it easy to create rich content.

Since that time we pioneered the use of developing technology time and again, we helped convince major commercial interests that the Internet was the way to bring people and business together online (for better and for worse), and we built a large and growing community of technologists and technology users at the UW.

Some of the areas where we can take some credit for being among the first include standardizing on IP-only transport on the network, creating a university web presence, building large collections of streaming audio and video, using IMAP as a widespread protocol for email, building web-based interfaces to administrative systems, creating an enterprise web portal before the word was even in use, creating widely-used independent tools for collaboration in teaching and learning, building a GUI interface for searching library resources, having a web-based single-sign-on system, deploying a campus-wide online events calendar, building web services interfaces to enterprise data, and many more.

Recently, we’ve been engaged in projects to really get a handle on how we organize, manage, and budget for IT work at the university. While not as sexy perhaps as some of our past technical adventures, I believe that being organized about how we plan for, manage, and communicate about IT services is a foundational discipline for being effective, agile, strategic, and innovative in supporting the work of the modern university.

The last couple of years have been tough ones in the UW Information Technology organization. It’s no secret that these are not easy times for public universities in general, and Washington’s state budget picture specifically doesn’t look too rosy. Constant cutbacks and layoffs have become part of “the new normal”, as admittedly outsized ambition and reach has been scaled back to a more modest scale.

Throughout all of the years, the people I’ve worked with at the UW have been a wonderful, extremely skilled and talented group. I’m honored to have worked among them, and I’m extremely proud of having played a part in the UW’s efforts over the years.

A Drupal tip: Adding taxonomy vocabulary description to a Views header

This is a how-to tip for Drupal 6, which I’m documenting because I couldn’t find this answer anywhere and it took me a day of scratching my head to figure it out. Drupalistas might find this useful, the rest of you can move along, there’s nothing for you to see here.

I was creating a View that lists all the nodes that have a given vocabulary term assigned to them, where the vocabulary term is passed in as the argument (e.g. http://mysite.myschool.edu/sitename/type/Basic, where “type” is the path to the View and “Basic” is the vocabulary term).

I wanted the description of the vocabulary term to appear at the top of the View. How to do that?

The short answer is to put a short snippet of PHP code in the header of the View. Step by step:

  1. Make sure that the PHP filter module is enabled in the Core – optional section of Modules.
  2. Edit the Header item of your View (in the Basic Settings). If you’re using a WYSIWYG editor, make sure your input format is set to PHP Code.
  3. Paste this code into the Header box:
    <?php
    // arg(1) is the second element of the View path (the term name, e.g. "Basic").
    // taxonomy_get_term_by_name() returns an array of matching term objects,
    // hence the [0] index; filter_xss_admin() sanitizes the description for output.
    $term = taxonomy_get_term_by_name(arg(1));
    print(filter_xss_admin($term[0]->description));
    ?>
  4. Update the View, then Save it. You won’t necessarily see the result in the Live Preview under the Views menus, but it should work in your site.

The slightly longer story here is that I think there’s a bug in the taxonomy_get_term_by_name() function that makes it so you have to reference $term[0]->description instead of $term->description. I filed that bug on the Drupal.org site at http://drupal.org/node/812164.

Hope that helps other folks besides me – leave a comment if it works or doesn’t work for you.

Levi-Strauss, remix culture, and mining the rock ‘n’ roll past

[Screenshot: Logic Studio]
Last week Wet Paint, my old band from the 70s, got together to play a college reunion gig in Bellingham. Great fun was had by all, and I think the band sounded better than it ever had.

Leading up to the gig I digitized our 1978 single from vinyl, and then I decided to try my hand at doing a remix of one of the sides, Steve Robinson’s very cool Shake A Maraca.

Doing a remix is an interesting process. Starting with the original tracks you visually slice and dice them into parts, adding various levels of audio processing to them, and then combine them with other audio. The tools for digitally manipulating music these days are nothing short of astounding in their power (and complexity). I used the latest version of Apple’s Logic, version 9, but there are a variety of competing tools.

Logic comes with a vast array of software instruments and pre-recorded snippets (known as “loops”) which can be utilized at will, and you can import audio from any other source you can find. So the process of the remix involves sifting through a huge library of available sounds and grooves, and trying to figure out what’s useful to the task at hand, and using those pieces to build up what hopefully becomes a compositionally coherent whole.

That got me thinking about the late Claude Levi-Strauss’ writings on “bricolage” in traditional cultures. Bricolage literally means “tinkering”, or as Wikipedia defines it, “to refer to the construction or creation of a work from a diverse range of things that happen to be available, or a work created by such a process”.

Levi-Strauss wrote about the use of bricolage in the construction of myths in indigenous cultures, saying:

The set of the ‘bricoleur’s’ means cannot therefore be defined in terms of a project… It is to be defined only by its potential use or, putting this another way and in the language of the ‘bricoleur’ himself, because the elements are collected or retained on the principle that ‘they may always come in handy’. Such elements are specialized up to a point, sufficiently for the ‘bricoleur’ not to need the equipment and knowledge of all trades and professions, but not enough for each of them to have only one definite and determinate use. They each represent a set of actual and possible relations; they are ‘operators’ but they can be used for any operations of the same type.

which sounds a lot like the current way music is built up digitally. He recognized that the results of the bricoleur’s technique “can reach brilliant unforeseen results on the intellectual plane,” which I think is completely true of using musical remix techniques, which can often bear only the slightest resemblances to the original source material.

Some of my old fogey contemporaries question whether the technique of building up new musical art by reassembling and manipulating digital pieces is as valid as making music by playing a traditional instrument. Get over it! While I personally will always treasure the pleasure of my hands and ears interacting with strings and wood, I don’t think that any one method of achieving sound necessarily holds any more validity than another – it’s what you can do with the tools that matters. I’m sure if I was just starting out with music, I’d be spending a whole lot of time in front of my computer mastering these tools.

All of which seemed relevant this week with the news of the Rolling Stones’ release of a remastered Exile on Main Street, complete with ten new tracks, some of which had some vocal and instrumental parts finished this year. I’ve always loved Exile (though I think Beggars Banquet is still my favorite Stones album), and having just been spending this time mining my own 30-year-old past for a remix, who am I to question whether Mick and Keith should delve into their own unfinished creations? While I haven’t given the new material a good listen, I did really enjoy the All Songs Considered interview with producer Don Was on the project, and the pieces he played during the interview sounded great. If I had a back catalog like the Stones, you can bet I’d be spending time revisiting it – and it sounds a good deal better than any of the Stones’ new material has in some time!

I also think that the bricolage approach has a lot of relevance to software engineering and how we manage IT, particularly in higher education, and I’ll have more to say on that in a coming post.

[CSG Spring 2010] SaaS requirements for higher ed

Tracy Futhey is leading a conversation on SaaS requirements for higher education.

Spent summer gathering docs on shared services from various campuses. In August started looking at email and hosting. Engaged a team from NACUA in October. Came up with email Issues matrix in November and worked out a model contract in March and a draft RFP model in April.

Strategies adopted by sub-team
– Avoid a hardcore Technical Requirements list (outsourcing a service/function means not dictating technical solutions)
– Recognize/Leverage limitations on free services (build RFP with expectation of payment for services)
– Assume reuse; organize materials accordingly
– Admit Rumsfeld was right: “there are also unknown unknowns, the ones we don’t know we don’t know”.

Issues spreadsheet – five big issues – Data Stewardship, Privacy, Integration, Functionalities, Service Level

Working with Educause to distribute as open source documents.

What may be next?
– Assess interest in glomming RFP (CSG + …?)
– Finalize plan for Educause to hold docs
– Issue common RFP in June/July?
– Responses in August?
– Campus discussions in fall? Vendor negotiation? (not clear vendor(s) will be responsive to our concerns, or that we will like the responses)
– Decisions by Jan 1, 2011?
– Pilots during spring 2011?
– Fall 2011 go-live dates?

[CSG Spring 2010] Service Management – Service Lifecycle Cradle 2 Grave

Romy Bolton (Iowa) and Bernard Gulachek (Minnesota) are talking about service lifecycle.

At Minnesota they think a lot about service positioning – not just reacting to perceived need. An unquenchable appetite with limited resources is not a good recipe. Tried to apply a general administrative services framework for the institution about where services should be placed along a continuum from distributed to centralized. Developed principles and examples to help communicate with people in the distributed units.

At Iowa they started “Project Review” process in the late 90s. Tuesday afternoon meetings – employee time with the directors and CIO. Open to everybody. Re-tooled project framework in 2007, service lifecycle management in 2008. Light ITIL framework

Emphasis on service definition, publication, end user request, provisioning. They still have project review, plus a project called Discovery to explore ideas, ITS Spotlight to call attention of staff to services. IT admins on campus have regular monthly meetings with 100+ people. Beginning to work on Do It Yourself provisioning tool.

Service definition starts in the project planning phase (a rough sketch of such a record in code follows this list):
– identify service owner and provider
– identify KPIs for service
– Reassess risks and cost-benefit for service
– Identify criticality of service on scale of 1-4
– Update 5 yr TCO and funding source
– Document service milestones
– Update status in ITS Service Catalog as appropriate
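
To make that checklist concrete, here is a minimal sketch of what such a service-definition record might look like – my own illustration in Python, not Iowa’s actual tooling; every field name and default here is an assumption:

    from dataclasses import dataclass, field

    # Hypothetical service-definition record mirroring the checklist above;
    # field names and defaults are my assumptions, not Iowa's schema.
    @dataclass
    class ServiceDefinition:
        name: str
        owner: str                        # service owner
        provider: str                     # service provider
        kpis: list = field(default_factory=list)   # KPIs for the service
        criticality: int = 4              # the 1-4 scale mentioned above
        five_year_tco: float = 0.0        # 5-yr total cost of ownership
        funding_source: str = ""
        milestones: list = field(default_factory=list)
        catalog_status: str = "draft"     # status in the ITS Service Catalog

    svc = ServiceDefinition(name="Central File Storage",
                            owner="Storage Services", provider="ITS",
                            criticality=2)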

Iowa uses SharePoint as their intranet and for publishing their service catalog, and Drupal for IKE (their knowledge management site). They’re just building out the self-provisioning service.

Tom Barton notes that there’s something called a Service Provisioning Markup Language – sort of languishing, but maybe some new energy is flowing into it.

Iowa – triggers for Service Review: User needs; environmental change (e.g. the cloud for email); financial; security event; hardware refresh; new software version; end of life for product. Review is not a small effort. Business and Finance office helps gather info. Includes: Service Overview, Customer Input, Financial Resources, Utilization and customer base, service metrics, market analysis, labor resource, recommendations. Owned by the senior directors.

At Minnesota they do annual service reviews of all of their common good services – “just began to enforce that”, in part borne out of frustration at not being able to sunset services. Two or three people focus on this, working with service owners. The current example is what services continue as they roll out Google Apps.

Service Performance and Measurement

Designed for strategic conversations with stakeholders that go beyond the operational. Began gathering availability data about a year ago – looking at whether services are alive. Klara notes that defining whether a service is up can be complex, but that it can be easier to measure simply whether a user can access a service. They have a systems status page showing current status – a mixture of automated updates and human intervention. Using Cisco’s Intuity product to track monthly/annual measures. They give roll-ups of info to deans and IT leaders. Include benchmark comparisons with Gartner or Burton benchmarks where available. They publish the cost of services annually, so people understand what they’re paying for and how that’s changed over time. http://www.apdex.org is a new alliance for understanding application performance measurement.
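
Since apdex.org comes up above: the Apdex score itself reduces to a simple formula, sketched here as my own illustration (nothing from the session) – responses at or under a target threshold T count as satisfied, those under 4T as tolerating, the rest as frustrated:

    # Apdex score = (satisfied + tolerating/2) / total samples, from 0 to 1.
    def apdex(response_times, t):
        satisfied = sum(1 for r in response_times if r <= t)
        tolerating = sum(1 for r in response_times if t < r <= 4 * t)
        return (satisfied + tolerating / 2) / len(response_times)

    print(apdex([0.2, 0.4, 0.9, 1.5, 3.0], t=0.5))  # -> 0.6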

At Stanford they’ve established Business Partners – senior people who know the organization who act as the pipeline in to the service managers. They meet with clients at a senior level.

[CSG Spring 2010] Service Management – CIO Panel

The morning is all about service management topics. My notes are going to be pretty sketchy because I’m coordinating the workshop and giving several presentations, but I’ll do my best and put up the slides from my parts.

Klara (Chicago) notes that culture is key in trying to implement service management. Steve (Iowa) agrees. At Iowa they built lightweight service and project management frameworks because that’s what the culture would tolerate. It’s a trigger-based process. Different events are recognized by service managers or owners and then initiate a review of a service. They put a lot of accountability on the service owner – they have to bring the right metrics forward. The review process gives them a chance to have some oversight of those metrics.

Bill Clebsch (Stanford) – doesn’t like ITIL or anything that looks like it comes from the outside to tell the organization what to do. So tries to talk first about accountability – that’s how they brought time tracking into the organization. Before that they did metrics – “a star performer’s best friend”. Put up customer-facing metrics, work they did with MIT. That was foundational to moving culture to more of a performance orientation. They’re a big Remedy shop. Every help desk at the university runs through their Remedy. Often people’s only knowledge of the organization is the service desk, so that’s a good place to start. Now working on change management. Remedy is good if you want to drink the kool-aid. They started a service portfolio effort about three years ago. Budget cuts are the best friend for getting these things done – makes your own organization aware, and makes your clients aware that they can’t behave in aberrant ways. Setting ambitious goals is good.

Kerry (Carnegie Mellon). In addition to culture, timing is key. The service portfolio effort started at CMU in the central IT organization when Joel first became CIO – they didn’t understand what services were being provided. Was beginning to have success when an external advisory board visited – CMU was growing from being a start-up to a global enterprise. That changed the conversation. “Who is responsible for a service” was a hard question. Started answering with “whoever Kerry or Joel calls to fix it.”

Bill – every year do an extensive client survey, and scores have gone way up in recent years, as have metrics and employee surveys. Having the organization much more outwardly focused matters as much as the data. Sense of ownership is huge.

Klara – Chicago is not as mature, yet deans still want to give things up to IT.

Steve – the role of technology is changing, which makes people more willing to cede control of it to the central IT group. Bill – when things get boring or risky, hand off to central IT.

A question about the relationship of project management to services. At CMU the transition from project to service was difficult because they didn’t yet know how to declare a project done. They’re now paying a lot of attention to review of projects and transition to services. Klara – important to be mindful about how to operationalize a project – bring in the other stakeholders like service desk, operations, etc.

Question about how to decide to stop doing services – Steve- service reviews help, when utilization is declining and other alternatives exist, then there can be a project to shut down the service. Bill – have a dedicated service portfolio team that looks at what services should be brought up and shut down. They have actually shut down some services. Team is made up of service managers, some directors, some executive directors. We’re moving into an era of being more service brokers than providers, and will do less provisioning. That will require a different kind of service managers. They have a few people in the organization who are explicitly service managers, with no other role.

Question about cost of services. At Iowa they allocated all of the IT costs to services – it was a lot of work, but the data was very interesting and started good discussions. In the process of trying to automate that. Tension between being efficient and being able to invest to help research and teaching to be better.

Time tracking is essential to doing costs of services.

Critical to not let the perfect be the enemy of the good. Shel notes that Bell Labs decided to go to activity-based accounting and four years later the internal accounting department had grown to 450 people.

Shel – you have to make the judgement on what your allocation model is for given services. You may not make the perfect decision, but you need to decide.

[CSG Spring 2010] Storage Strategies

Storage strategy survey results. Storage management is equally distributed between central IT, distributed, both, or not sure.

What’s provided centrally? All offer individual file space. Most offer backups for distributed servers and departmental file space. Half offer desktop backups.

Funding models – just about all have some variety of pay for what you use. Most have some common goods, and about half have base plus cost for extra.

About half do full cost recovery including staff time.

Challenges – data growth is top, tiered storage is next, along with centralizing and virtualization.

Biggest explicit challenges : Data growth, perception of cost, research storage.

Storage at Iowa
Central file storage: base entitlement – individuals 1-5 GB, departments 1 GB per FTE. 4-hour recovery objective. 99.97% uptime. 89% participation. Enterprise level, high availability.
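
Quick arithmetic on what that uptime target allows (my calculation, not from the talk):

    # A 99.97% uptime target permits about 2.6 hours of downtime per year.
    availability = 0.9997
    print((1 - availability) * 24 * 365)  # ~2.63 hours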

Started with one-price-fits-all network file storage, then offered some lower-cost network storage (e.g. without replication or backup); now they’ve got lowest-cost bare server storage – lots of enthusiasm for that model.

http://its.uiowa.edu/spa/storage/

Low-cost SAN for servers: $0.36 – $1.68 per GB per year, depending on service level. Cost recovery covers hardware and software – no staff time or data center charges.

Storage Census 2010

51% of storage being used by research. 35% Admin and Overhead (including email), 11% Teaching, 3% Public Service.

72% of storage is backup vs. online.

Next steps: identify and promote research solutions; build central backup service; build, promote archival solutions.

Storage @ U Virginia – Jim Jokl

Hierarchical Storage Manager Services: Storage for long-term research data (centrally funded but not well marketed); Library materials (funding via Library contributions to infrastructure); RESSCU (off-campus service for departmental disaster recovery backups).

Enterprise Storage – based on NetApp clusters. NFS and CIFS for users, iSCSI and SAN internally. Works really well, highly reliable, replicated. Mostly used for central services. For departments it’s $3.20/GB/yr to $3.50 without backups. Lots of incidental sales to people who want a gigabyte or so for additional email quota. Doesn’t work for people who want a lot of storage.

New mid-tier storage service – focus on a reasonable and affordable storage service for departments and researchers.
Requirements: reliable, low cost, low overhead, self service. Unbundled services – optional remote replication and backups. Access via NFS and CIFS. Snapshots – users deal with their own restores. Offering Linux and Windows versions. Doing group files based on their groups infrastructure. Using RAIDKING disk arrays. Using BetterFS on Fedora, Windows Server for the Windows side.

Cost model – 1 hour plus $0.34/GB/yr (RAID 5, but not replicated). Next year they expect to drop the price by 50%. Currently about 22 TB leased on NFS and only marginal Windows use to date. All of the complaints about the costs of central storage have gone away. Research groups are interested in buying big chunks.

Shel Waggener – Berkeley Storage & Backup Strategy

Shel says scale matters and no matter who says they’re doing it better faster cheaper, without scale they’re not.

2003 – every department runs own storage – including seven within central IT.
2004 – data center moves creates opportunity for common architecture
2006 – dedicated storage group formed. No further central storage purchases supported except through storage team.
2007 – Hitachi wins bakeoff. 250 TB. Email team works with storage group to move from direct-attached to SAN
2010 – over 500 hosts using pool – 1.25 PB expanding to 3 PB this year.

SAN-based approach. Lots of serial attached SCSI disk – moving away from fiber-channel.

Cheapest storage is now 25 cents per gigabyte per month. The most expensive tier (now $4.00/GB/month) bears the cost of the expensive infrastructure that the other tiers leverage.
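
Back-of-the-envelope annual cost of one terabyte at those two tiers (my arithmetic, using decimal gigabytes):

    # Annual cost of 1 TB (1000 GB) at the quoted per-GB/month rates.
    for tier, rate in [("cheapest", 0.25), ("most expensive", 4.00)]:
        print(tier, 1000 * rate * 12)  # $3,000/yr vs. $48,000/yr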

Failure rates on cheap disk are acceptable, but recovery time is longer.

At the cost of storage, they don’t have quotas for email.

One advantage is paying for today’s storage today. Departments buy big arrays and use 5% in the first two years, which is much more expensive. But that’s what’s supported by NIH and NSF.
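
A sketch of why buying ahead costs more – the 5% utilization figure is from the talk, but the prices here are assumptions for illustration:

    # Buying a big array up front vs. paying a central pool rate for actual use.
    array_cost = 100_000                   # assumed cost of a departmental array
    array_tb, used_fraction = 100, 0.05    # only 5% used in the first two years
    central_rate = 3_000                   # assumed pool rate, $/TB/yr (~25c/GB/month)
    print(array_cost)                                   # day one: $100,000
    print(array_tb * used_fraction * central_rate * 2)  # two years of use: $30,000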

Backing up 338 users’ desktops (in IST) takes up 1.3 TB.