[CSG Fall 2008] Collaborative Applications and Infrastructure Panel


My notes are spotty here because I’m on this panel

Klara is talking about the challenges they face at Duke – they had education materials about how faculty could use Web 2.0 systems, but then it became clear that faculty wanted the institution to provision those spaces in similar ways. Now they are dealing with the tension between the need for privacy and security and desire for openness (need both within the same project). Faculty need collaborators outside the institution. Policy issues are tricky, as are identity and access management. Carrying the artifacts forward is an issue.

Shel is talking about some of the work from Berkeley on their Draft Strategy for Campus Collaborative Tools. It’s tough to get your arms around the problem space, so they did a research project on it. The ultimate goal was to make collaborative activities as easy as using email or the phone. But the space is changing quickly – how are we to help faculty do better with collaboration?

Original idea: a campus toolkit that would be a family of collaborative tools – a mashup of mashups. That idea got trashed by the campus – “can’t ride a tornado even if you try”. Every person uses a different set of tools.

Their motto now: “Embrace the chaos”. Can’t fight or control it, so need to tap into it. Invest in infrastructure that allows that embrace as easily and securely as possible. Toolkit will be around guidance to the community – policy, privacy, and security.

Goals: provide enhanced identity management services; make it easier to use and share data in collaborative tools; train workforce to work with and support these new collaborative technologies (privacy guidelines); establish a common framework and vocabulary for defining support for collaborative tools.

Application folks are doing analogous stuff with SOA and web services. The infrastructure folks are hell-bent on keeping this from happening – what about building bullet-proof reliable services?

Students expect to be able to easily shift contexts and identities – want some stuff that’s Berkeley-branded, but not always.

In outsourcing infrastructure for Bell Labs, Shel learned that by trying to impose all of the eventual conditions on the outsourced vendors, they eliminated all of the possible advantages and cost savings. As we have gone into perpetual Google beta-land, people’s expectations are changing.

Privacy expectations are also changing, and we have an educational role to play there.

Need to help educate IT staff not in the central organization. Example of a dean who decided to move learning apps to Facebook while the IT staff in that unit was in the process of developing new apps for Sakai.

They expect to adopt the report by January.

Lots of discussion about policy issues and what needs to be retained, and what to do with access requests.

I got people talking about the concepts of scholarly social networks.

John is talking about collaborative infrastructure at Brown. They’re trying to unite applications around Mace Grouper, using course memberships. Faculty have been frustrated by the difficulty with the edge cases – people who are in fact participating in courses that the central system doesn’t know about, departmental staff who have roles in courses that the student system doesn’t know about, etc. They added a schema for each course with roles that extend the official registration. Faculty members “pretty well like it”.

Faculty are largely unaware of the services available to them, and they expect last-minute setup, including provisioning. Brown is building a faculty gateway that lets faculty see a list of services they can enable for each course, and see and specify who’s in a course, vagabonds, etc. They’ve found that the UI for Mace Grouper is a little beyond many faculty.
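The core idea – effective course membership as the union of the official registrar feed and locally added extension roles – can be sketched like this. All names and the data layout here are hypothetical illustrations, not Brown’s actual Grouper schema:

```python
# Hypothetical sketch: effective course membership = registrar feed
# plus locally maintained extension roles (TAs, auditors, departmental
# staff the student system doesn't know about).

registrar_feed = {
    "cs0150": {"student": {"alice", "bob"}, "instructor": {"prof_doe"}},
}

# Local extensions, maintained through something like a faculty gateway.
extensions = {
    "cs0150": {"student": {"carol"}, "staff": {"dept_admin"}},
}

def effective_roles(course):
    """Merge official registration with local extension roles."""
    merged = {}
    for source in (registrar_feed.get(course, {}), extensions.get(course, {})):
        for role, members in source.items():
            merged.setdefault(role, set()).update(members)
    return merged

print(effective_roles("cs0150"))
```

The point of keeping the extensions separate from the feed is that the nightly registrar sync can overwrite its own groups without clobbering the roles faculty added by hand.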


[CSG Fall 2008] Social Learning – Geri Gay


Geri Gay is a Cornell faculty member in Communication and Information Science.

“Scaffolding” seems to be a new term I’m hearing a lot today in the context of e-learning.

Social learning project – tracking activities using computers and wireless devices, and having people keep journals. Out of diary studies you find words like access, convenience, freedom, no constraints, social connections, at the beginning of the courses. Freedom from constraints of space and time. At time 2, words like temptation, distraction, addiction, social connections, start showing up. 3-7 hours per day online, 60% of time on email or instant messaging. Diverting attention from classroom activities and relations.

Done some studies on divided attention – recall and retention tests with open and closed laptops. People can read NY Times if they look at headlines while they’re listening, but if they start reading the articles in depth, then they can no longer multitask. People seem happier in classes if they feel that they have access to the world outside during class.

Divergent communities – instead of building learning communities, the wireless technologies may have diverted the class from its original goals.

Women spend more time in social activities, men spend more time in sports and finance.

Dynamic feedback study – what does a computer do well, what do people do well? In computer communication fewer opportunities for interaction, trust may be reduced, bridging differences can be harder. How do we train online collaborative skills?

Working on feedback for guided reflection. Track agreement words that people are using, and reflect them back. Developed something called Groupmeter.

Working on how to improve informal interactions – awareness technologies where a full-face view means someone is available and a profile view means “don’t interrupt”.

They’ve been mapping social interactions – teams that reach out to other teams do better – more creative, more ideas, better grades.

Context-aware computing – devices that can gather information about the physical environment, and also annotate them. Example is cultural implication of new technology in museums. Built sensors, looking at density of people, density of info activity, tempo of movement (physical and virtual), etc.

Who is here? Where should I go next? What are my peers excited about? Who might be interested in this?

Looking at expertise vs. informal commentary in tagging museum objects on mobile devices.

Did recall and retention tests on school kids comparing whether they had interactive devices in museum vs. curator narration – significantly better with the interactive device.

Enabling this vision requires re-inventing learning substantively – not only the how and when of learning, but how we communicate, deal with information I/O, work, design and build things, conduct research, deal with the environment, and do commerce.

[CSG Fall 2008] Collaboration and Social Technologies – eLearning


I’m at Cornell for the Common Solutions Group Meeting.

First part of workshop will deal with e-learning, afternoon with collaboration tools.

Anne Moore from Va Tech is talking about categories for thinking about evaluating the success of learning technologies. She starts by talking about the 1999 National Academies report on Being Fluent with Information Technology. One point they made is that when you look at critical thinking and sustained reasoning, you need to look at those skills in an environment that is technology-assisted, but within a domain. The report hasn’t been widely read or applied.

Being able to assess higher-level skills is becoming more important due to the emphasis on accountability. We’ve been more focused on inputs rather than outcomes – academic institutions are largely still not focused on being able to demonstrate learning outcomes.

Joel Smith from Carnegie Mellon talks about the Open Learning Initiative at CMU. He calls this Scientifically Informed Digital Learning Interventions.

The challenge is to design and build fully web-based courses which by rigorous assessments are proven to be as good or better than traditional teaching methods. There are multiple ways of building those courses.

Why? Increased access, improved effectiveness, providing flexibility, contain costs.

The current structure of higher ed presents substantial roadblocks to the application of proven results and methodologies from the learning sciences. We depend on individual faculty to develop courses – before we had lots of information from the cognitive and learning sciences, teaching may have been more of an art than a science. But it’s not fair to saddle each faculty member with having to know all that cognitive science. There’s an opportunity in e-learning interventions, developed by collaborations of experts, to embed the knowledge of the learning sciences and make the practice more effective.

OLI Guiding Assumptions:

– Digital learning interventions can make a significant difference in learning outcomes.

– Designs grounded in learning theory and evaluation have the best chance of achieving the goal.

– A possible, acceptable outcome is failure or mixed failures and successes – they are not promoting technology for its own sake.

– Formative assessment will be a major feature (and cost component – around 40% of the budget) of the design and improvement of courses.

– IT staff working with faculty is too limited a partnership – learning scientists, HCI experts, and assessment experts must be part of design, development, production, and iterative improvement.

OLI courses are available at http://www.cmu.edu/oli. Don’t expect an “OCW experience” – this project has a different set of goals than OCW. These are full courses, designed for real learners. “Clicking around” will be unsatisfying: these interventions are designed to support a novice learner in acquiring knowledge working on their own.

Key elements in OLI courses:

– Theory based –

Builds on prior informal knowledge. We know that building on informal knowledge helps people learn faster. An example is an economics course with exercises that build on student knowledge of markets from eBay. Includes cognitive tutors with just a few node trees that give feedback on correct or incorrect decisions.

Provides immediate feedback in the problem-solving context – a midterm and final are hardly immediate or rich.

Promotes authenticity, flexibility, and applicability. Real-world problems, which are messy and not clear-cut, are much more effective in promoting better learning outcomes.

– Feedback loops (The killer app) – courses record student activity for robust feedback mechanisms. Can feed info back into database or to faculty – this can change the nature of education. Can also give feedback to course designers and learning scientists.

There are papers and evaluations of outcomes on the OLI web site. One example is in statistics – in the first iteration the online students did as well as the students in the traditional course which itself had been worked on for ten years with cognitive scientists. Then they taught the course in a blended mode (meeting with faculty once a week, using OLI as the textbook) in half the time. Students (randomly selected) showed significantly greater gains than the traditional course. Now considering teaching all of the sections that way.

Courses are instrumented to provide instructors with lots of feedback. Faculty can be far more effective when they know what concepts the students are getting and where they’re having problems. The vision is to have a digital dashboard for faculty and students.
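The feedback-loop idea – logged student activity aggregated per concept so instructors can see where the class is struggling – might look something like this minimal sketch. The event fields and concept names are illustrative assumptions, not OLI’s actual data model:

```python
# Hypothetical sketch of a course feedback loop: aggregate logged student
# activity per concept for an instructor dashboard. Field names and
# concepts are illustrative, not OLI's actual instrumentation.
from collections import defaultdict

events = [
    {"student": "s1", "concept": "sampling", "correct": True},
    {"student": "s2", "concept": "sampling", "correct": False},
    {"student": "s1", "concept": "confidence-intervals", "correct": False},
    {"student": "s2", "concept": "confidence-intervals", "correct": False},
]

def concept_summary(events):
    """Fraction of correct attempts per concept (class-level, not per-student)."""
    totals = defaultdict(lambda: [0, 0])  # concept -> [correct, attempts]
    for e in events:
        totals[e["concept"]][1] += 1
        if e["correct"]:
            totals[e["concept"]][0] += 1
    return {c: correct / attempts for c, (correct, attempts) in totals.items()}

print(concept_summary(events))
```

Note the aggregation is deliberately at the class level, consistent with the privacy point raised later in the discussion that feedback is not granular down to individual students.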

“Improvement in post-secondary education will require converting teaching from a ‘solo sport’ to a community-based research activity” – Herbert Simon

Deborah Heyek-Franssen, from Colorado is talking about Carts & Horses in the Collaborative, Social Space. Technology is the cart, pedagogy and content should be the horses.

The basics – understand elements of learning, articulate content goals, find pedagogical method, and choose appropriate tool.

Some elements of learning –

Working memory – limited, seven “chunks” at a time. What does this mean for pedagogy? Chunking activity and keeping working memory available for learning. Can technology help? It can, but it can also harm it – e.g. slides with gratuitous images and animations. Cognitive load of looking at images or simulations is lower than reading about it.

Engagement – students get engaged in challenging, complex, multidisciplinary tasks that involve sustained amounts of time. What does this mean for pedagogy? Designing in-class and out-of-class activities that engage, including lecture and readings. Collaborative and social tools can help engage students.

Motivation – what motivates students? Building motivation into the course – rewards for desired activities.

Reflection – explaining and then critically evaluating own and others explanations. Wikis and blogs can help reflection.

Building on past knowledge – students now have opportunity to build global knowledge – e.g. wikipedia.

Deb notes that simulations can be addictive, and Greg comments that addiction doesn’t equate with learning. While that’s right, it seems to me that learning is at least more likely to occur if you’re highly engaged.

Shel notes that in research universities, even when faculty really want to teach, they’re mostly consumed with their research and even when we have tools and staff resources to help them, they’re not particularly interested in spending the time to work on really improving course methodologies. Joel notes that it works much better to engage with whole departments at curricular levels rather than individual faculty.

Cliff notes that collecting lots of real-time data on student activities has a creepy element to it and wonders what the policy and privacy issues are. Joel says that at CMU there’s an opt-in informed consent form students can assent to. And feedback is not granular at the level of individual students.

Greg says that the problem with assessment is whether or not people will make any changes based on the assessment, and if we don’t have institutions that make changes based on data then it may not be worth spending money on assessment.

Shel says they did a survey of large courses and 80% of students were using Facebook, but only 20% were using Sakai. So when they put the courseware into Facebook, the students didn’t use it there either.