[ECAR Summer 2008] Gwen Jacobs – The (Neuro) Science of Learning

Gwen’s talk is subtitled What do we know about how the brain learns that can inform and improve pedagogy?

What do we know about the brain that might be useful?

Experience and learning change the physical structure of the brain, which organizes and reorganizes brain function. Different parts of the brain may be ready to learn at different times. Learning continues throughout life.

Four examples

Language learning. Two parts of learning language, which begins right when you’re born: perceptual part (hearing and perceiving) and production (practicing to make songs and words). Human language acquisition is mimicked in bird song – every step of the way. The babbling phase is practicing. Birds and humans learn the songs that they hear. As native language skills improve, perception of other languages decreases, so it becomes harder to acquire other languages later in life. Both learn better with a live tutor – social interaction is important to learning. In birds learning stops at sexual maturity – language ability decreases after age 14 in humans.

Why is it so hard to learn new languages as you get older? As you learn and focus on your native language you gradually lose the ability to perceive other languages. Example: an experiment with Japanese speakers who can’t perceive the difference between “ra” and “la”. The language area in the brain of bilingual speakers is enlarged.

What’s going on in teenagers’ brains?

Different parts of the brain mature at different times (Toga’s work at UCLA). The wiring of the brain changes just prior to the onset of puberty. Sensory motor parts mature first, language and spatial reasoning during ages 6-12, and the frontal lobes (reasoning, decision making) mature last – not until about age 20. If you look at the brain regions responsive to emotions like fear or anger, processing gradually shifts from those regions to the ones used for reasoning.

Sex hormones can change brain structure and function. In songbirds males are the ones who learn to sing. If you give females testosterone they can learn to sing – they develop “male” brain structures. There are studies suggesting that there are gender differences in human brains.

Experience continues to modify the brain – learning takes place throughout our lives.

A study looked at brain imaging in London taxi drivers asked to remember a route – taxi drivers have a very large hippocampus, which is involved in memory and navigation. Being a musician throughout life actually improves your cognitive abilities in many other areas. For people who play checkers, Scrabble, Sudoku, etc., there’s a lot of evidence that this improves the ability to solve other problems. Deaf individuals learn language through signing – it turns out the same brain regions are used for language, no matter what the sensory modality.

Our students – how can we engage them? Given their current experience, multitasking all the time, how can we engage them in the classroom?

Active learning = paying attention

Science of learning – National Science Foundation. Goals: advance frontiers of all the sciences of learning through integrated research. Started in 2004. Six centers funded so far, with very different explorations underway.

Temporal dynamics of learning – there’s a distinct region of our brain responsible for remembering faces. There’s a difference between categorizing an object vs. categorizing the object as something you can recognize and name. Musicians activate different brain regions when they look at music notation – musicians are better at multitasking than non-musicians. Individuals with autism or Asperger’s have a hard time recognizing and making facial expressions – it turns out they don’t use that part of their brain. The UCSD center has developed a game that helps them recognize expressions.

LIFE center – learning how a toaster works from a video. People learn much better from a point of view camera vs. a side camera. Social homework game – kids train an AI agent to answer questions. Students learn and retain more from training these agents.

CELEST center – neuro-morphic engineering. DIVA – A model for speech. Looking at how we learn to make motor movements to create speech. Created a model for reproducing speech from listening.

There’s more, but I’ve got to leave to catch a shuttle to the airport…

[ECAR Summer 2008] Vernon Burton – Keeping Up with the E-Joneses in Humanities and Social Science Computing

Vernon Burton is Professor and Director of I-CHASS at the University of Illinois at Urbana-Champaign.

Our culture makes us look at things differently – humanities and social sciences vs. computer science. When they talk they think they know what each other are saying, but they often don’t.

He started out using computers in the 1970s to analyze census and tax records in a large southern community – was banished to running his jobs between midnight and 4 am.

Narrative historians in the 70s despised the computer. Even though historians have moved away from quantitative techniques, ease of use of software and information is encouraging students to search out quantitative methods. In the last decade humanities computing has come into its own. The web has provided access to lots of the humanities record. In 1993 the humanists who saw the Mosaic browser went wild.

He notes that there’s a great need for cyberinfrastructure in the arts and humanities, but that departments can’t afford to build it themselves. So it makes sense for there to be hubs of this activity – one is I-CHASS that he directs.

He cites an example of one of his grad students who’s studying the gentrification effect of gay and lesbian couples moving into neighborhoods – using digital mapping technologies with census data to study that. He says that’s the future of research in the humanities, but we’re not training historians for that kind of research.

He talks about SEASR – http://seasr.org/ – which provides cyberinfrastructure for the humanities.

inscriptifact.ncsa.uiuc.edu/

Cartography of American Colonization Database

Unicorn: toward enhanced understanding of virtual manuscripts on the grid. He’s got real questions about the value of the grid efforts – they’re not driven by the scholars themselves. The scholars don’t even have enough support for the use of normal technologies. But this is a model project for grid use in the humanities.

ageoflincoln.com – his book, which contains “augmented reality” 3D content.

HistorySpace – information rich virtual environments for historical scholarship.

Enhanced Knowledge Discovery for Social Science – representing the views of underrepresented populations. Tools for automating data analysis to pull quantifiable data from multiple sources.

E(d)2 – Emancipating Digital Data: Scanning and Image Analysis of the Lincoln Papers – http://isda.ncsa.uiuc.edu/lpapers

Vernon makes the point that what won the world wars wasn’t the generals, but the farm boys from the Midwest and South who knew how to make things work with baling wire when needed – and that’s what we need for the humanities and social sciences now: folks who work between the scholars and the computer scientists to make things happen. We also need to develop new models for publication and sharing of knowledge.

[ECAR Summer 2008] Rudolf Dimper – Cyberinfrastructure at the European Synchrotron Radiation Facility and Its Impact on Science

Rudolf is head of computing services at ESRF.

Observations today are done with sophisticated instruments – including synchrotrons. A synchrotron is a super microscope for examining condensed matter. They operate from the UV to the hard x-ray spectrum. Synchrotron light is used because it has remarkable properties: brilliance (1,000 billion times brighter than a hospital x-ray tube).

The European light source is a 6 GeV source in Grenoble. It brings together 10,000 researchers and engineers, a cooperation between 18 countries, with an annual budget of ~80 m€.

Some applications – studying the structure of spider silk; medical applications, including angiography that gives better results than conventional hospital techniques; geophysics, studying samples which undergo extreme changes; chemistry, how catalytic processes function; semiconductors; paleontology, high-resolution microtomography of fossils.

52 beamlines, 6,222 user visits in 2007. 15,308 eight-hour shifts scheduled for experiments in 2007. >1,500 peer-reviewed publications/year.

Over the last 10 years the data volume has increased by a factor of ~300. In 2007: 300 TB, ~1×10^8 files.

Storage policy is only to keep 6 months of data, even for internal users. This is under heavy discussion.

The French network infrastructure has not increased bandwidth in three years. That’s a problem.

Network based on Extreme Networks switches, storage on NAS systems, StorageTek tape for offline. Commodity clusters in the data center, ~400 CPUs (totally insufficient).

ESRF Upgrade Programme – 290 m€ programme.

The single most productive facility producing protein structures for the Protein Data Bank.

New methods: nanobeams & raster scans. Looking to increase resolution by two orders of magnitude. Petabytes of data – how to carry away the data. Easy to imagine a PB per day in 10 years, contrasted to 15 PB per year for the LHC.

Two fundamental problems:

Latency – diminish the time to measure, store, and analyze data.

Add functionality – new ways to measure, store, and analyze data. “Have to get our hands dirty with grid tools”

Looking desperately for 100 Gbps network.
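
A quick back-of-the-envelope sketch (my own arithmetic in Python, not from the talk) shows why 100 Gbps is the target: moving a petabyte a day needs a bit over 90 Gbps sustained, and today's links take days to shift a single petabyte.

```python
# Rough arithmetic only: what does "a petabyte per day" imply for the network?

PB = 1e15          # bytes (decimal petabyte)
DAY = 24 * 3600    # seconds

sustained_gbps = PB * 8 / DAY / 1e9
print(f"1 PB/day sustained: {sustained_gbps:.0f} Gbps")   # ~93 Gbps

for link_gbps in (1, 10, 100):
    hours = PB * 8 / (link_gbps * 1e9) / 3600
    print(f"1 PB over a {link_gbps} Gbps link: {hours:,.0f} hours")
# -> ~2,222 h at 1 Gbps, ~222 h at 10 Gbps, ~22 h at 100 Gbps
```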

I/O bottlenecks in research clusters are a big issue.

ESFRI position paper on digital repositories – lots of storage and access policies.

[ECAR Summer 2008] Heidi Hammel – The future of exploration in astronomy

Heidi Hammel led the Hubble Space Telescope team that observed the Shoemaker-Levy 9 comet impact on Jupiter, and works at the Space Science Institute in Boulder (though she lives in Ridgefield, CT).

Telescopes – devices for gathering light. Refractive telescopes use lenses. Reflective telescopes use mirrors. Viewing through the atmosphere blurs your image. Adaptive optics helps – e.g. the hexagonal segments of the Keck telescope mirror which can adapt at 90 Hz. So why put a telescope in space? Clouds, but even clear atmosphere distorts light. Even worse, it absorbs light.

More to light than meets the eye. Earth’s atmosphere absorbs UV and infrared and some radio. Adaptive optics not suited for all visible wavelengths.

James Webb Space Telescope – 6.5 m “mirror” (adaptive). 3 cameras, 1 spectrometer. Less than half the cost of Hubble (~$4.5B full life cycle). Launch date 2013. About a million miles out, at the L2 point – no way to service it. Collaborators from all over the country and all over the world need to coordinate, so they needed a versatile platform for distributed configuration and data management – NGIN (Next Generation Integrated Network). It does all project management functions including risk management, action-item tracking, shared files for collaboration, etc., and is still being expanded and developed. It has kept the project on schedule and on budget for the past three years.

Webb origins science – four themes – first light and reionization; assembly of galaxies; birth of stars & protoplanetary systems; origins of the universe and life itself.

Large Synoptic Survey Telescope – to image the entire accessible sky to a deep level with a wide field of view in a fast operational mode. To be built in Chile. Looking for time-variable phenomena in a huge wide swath of sky. The camera has a 3.5-degree field of view and 3,200 megapixels. Designed to work fast. Six colors, 20k sq. degrees at 0.2 arcsec/pixel, with each field revisited 2,000 times. ~2 terabytes per hour, >10 billion objects. “The Monster Truck of telescopes”. 6 GB of raw data every 15 seconds.
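
Those numbers are easy to sanity-check. A minimal sketch of the arithmetic (mine, not from the talk, and assuming roughly 10 observing hours per night):

```python
# Consistency check of the quoted LSST data rates (illustrative arithmetic only).

GB, TB = 1e9, 1e12
frame_bytes = 6 * GB        # "6 GB of raw data every 15 seconds"
cadence_s = 15
obs_hours_per_night = 10    # assumed, not a figure from the talk

rate_Bps = frame_bytes / cadence_s
per_hour_tb = rate_Bps * 3600 / TB
per_night_tb = per_hour_tb * obs_hours_per_night

print(f"{per_hour_tb:.1f} TB/hour, ~{per_night_tb:.0f} TB per night")
# -> 1.4 TB/hour raw, ~14 TB per night: the same ballpark as the quoted ~2 TB/hour
```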

Fundamental BIG question – what is the fate of the universe? Contrary to previous models (open, flat, closed), recent observations of supernovae indicate that the expansion of the universe is accelerating. Some mysterious force is counteracting gravity – call it dark energy. It permeates all of space. As of the last couple of years, it’s been found that about 73% of the universe is dark energy (about 23% is dark matter). LSST is to investigate dark energy by taking precision measurements of four dark energy signatures in a single data set.

Another big question – what is our destiny? The military is observing meteor impacts from satellites that monitor large explosions. LSST will inventory the Near Earth Objects population.

Argo – Voyage Through The Outer Solar System

Use Neptune to get to the Kuiper Belt. Why go to Neptune again? Voyager flew by Neptune in 1989 (having launched in 1977) with old technology, and besides, everything we can detect in the Neptune system has changed in the last 20 years – cloud distribution, stratospheric temperature, its ring system, Triton’s atmosphere, etc. We can’t see the details to explain this. The Kuiper Belt – Pluto and 10,000 of his closest friends. Argo’s access is ~4,000 times bigger than that of New Horizons (the current mission to Pluto). If launched in 2020, it will get to Neptune in 2033 and the Kuiper Belt in 2041. The Argo team has never met in one place at one time. The entire mission is being remotely planned and executed. This mission is not unique – it’s emblematic of a new mode of operation.

Space Science Institute – a 501(c)(3) formed in 1992 in Boulder to enable world-class research in space and earth science. Heidi is the director of the research group. There’s also a flight ops group that runs the Cassini spacecraft, and a large education and public outreach group that builds museum exhibits. 30-50% of the research staff are distributed nationwide – off-site from the inception of SSI, for over 15 years. She quit MIT when they told her she had to sit in her office 5 days a week in Cambridge, when she lived in Connecticut. The off-site option offers significantly reduced grant overheads when compared to universities. Growth management is a challenge – lots of people want to work this way. Many young scientists are leaving (or not going to) academia for these kinds of alternatives.

[ECAR Summer 2008] Kevin Trenberth – NCAR – Global Warming Affects us all: What must be done?

The IPCC report stating that warming of the climate is unequivocal and very likely caused by human activities was a remarkable demonstration of the strength of the evidence – passed by 130 nations.

Increasing CO2 – it has a lifetime of about 100 years before it gets taken out of the system. US emissions continue to increase – a 20% increase since 1990. China now represents a big and growing percentage of CO2 emissions. If we make gains in the Western world, will they be overwhelmed by the emerging world? There is pressure to look at emissions per capita rather than by nation. Western Europe is 2.5x better than the US – is that due to higher gas prices? This highlights the fact that population is a big part of the equation, but nobody is talking about that.

There are also differences between states – California vs. Texas, for example.

Evidence – sea level is rising – 48 mm since 1992 (as measured by satellites). Might be best measure. Glaciers are melting, even as snowfall rises. Snow season is getting shorter – meltoff in Pacific Northwest is 7-10 days earlier now. Risk of drought increases substantially, along with wildfire danger.

Everything that’s going on in climate has a natural variability component and a global warming component.

Modeling the climate system is complex. We need computers that are 10,000 times faster than those we have now to model it accurately. He shows a slide of modeled global temperatures that accounts for the real observations by adding human effects to what would have occurred naturally.

Precipitation patterns change – the wet places get wetter and more intense, the dry places get drier.

What do we do?

Mitigation, adaptation, or do nothing.

Doing nothing == adaptation without planning.

What you do relates to value systems. That’s where politicians get involved.

The UN Framework Convention on Climate Change was ratified in 1994, including by the US. The Kyoto Protocol is a legal instrument under that convention; the US withdrew in 2001. In 2004 US greenhouse gas emissions were 16% over 1990 levels.

What about a carbon tax? If there were a price on CO2, presumably the production of CO2 as waste would be reduced. Cap and trade is a variation – favored by Congress at present (at least partly because it doesn’t have the term “tax” in it). Tracking sources and violators becomes a whole new industry. If countries don’t subscribe, it can favor those who pollute.

Coal-fired power plants have been brought online at a rate of 2 per week over the past 5 years. China leads, with one every 3 days or so.

A freeze on emissions means that concentrations of CO2 continue to increase. We have to adapt to climate change.

Assess vulnerability; devise coping strategies; determine impacts of possible changes: we need information!

We need to observe and track climate changes as they occur; analyze global products with models; understand the changes and their origins; validate and improve models; initialize models and predict future developments; and assess impacts so as to provide advice.

Weather prediction – the problem of predicting the evolution of the atmosphere for minutes to days to perhaps 2 weeks ahead. It begins with observations of the initial state; the atmosphere is a chaotic fluid, and small uncertainties or model errors grow rapidly in time, making longer-term prediction impossible.
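
The “small uncertainties grow rapidly” point is easy to demonstrate with a toy chaotic system. Here is a minimal sketch using the classic Lorenz-63 equations (a drastically simplified convection model, not anything resembling a real weather model): two initial states differing by one part in 10^8 end up completely different.

```python
# Toy demonstration of sensitivity to initial conditions (Lorenz-63 system).

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations by one forward-Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)           # "true" initial state
b = (1.0 + 1e-8, 1.0, 1.0)    # same state with a tiny observation error

for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 500 == 0:
        print(f"step {step:4d}: |x_true - x_perturbed| = {abs(a[0] - b[0]):.2e}")
# The difference grows from 1e-8 up to order 10, so the two forecasts diverge
# completely - the same reason deterministic weather prediction tops out at
# roughly two weeks.
```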

Climate prediction – the problem of predicting the patterns or character of weather and the evolution of the entire climate system. Often regarded as a “boundary value” problem. This means determining systematic departures from normal arising from the influences of the climate system and external forcings. The oceans and ice evolve slowly, providing some predictability on multi-year time scales. Because there are many possible weather situations, it is inherently probabilistic.

As the time scale is extended, the influence of anomalous boundary forcings grows to become noteworthy. The largest signal is El Niño, which involves knowing the state of the ocean. All climate prediction involves initial conditions of the climate system, leading to a seamless (in time) prediction problem – a challenge we’re not capable of meeting at present.

There have been no revolutionary changes in weather and climate model design since the 1970s. The models are somewhat better. Meanwhile, computing power is up by a factor of a million. That’s gone to increasing resolution of models and longer runs.

[ECAR Summer 2008] Bob Franza

Seattle Science Foundation – to nurture networks of experts. They have an 18k sq. ft. facility in Seattle, but if they build more bricks in the future they will have failed. Working in virtual environments (Second Life) – not games.

Using the virtual environment to solve real problems of distributed teams, not just as a neat technology.

CareCyte – why should workflow and health care be foreign concepts to each other? If you were going to redesign a health service facility, what would you build? They rethought the design – ultra-fast design, manufacture, and assembly, near-laminar airflows, all internal walls are furniture that can be reconfigured easily, etc. They rendered the facility in about 2.5 days in Second Life, and can show people the ideas and design in an engaging way that can’t be accomplished with drawings and static images. You have to avoid using the technology in ways that merely replicate existing activities (e.g. giving lectures).

“Recruiters will look at somebody with a World of Warcraft score of 70 or above as CEO material.”

Doesn’t like the term “virtual” – has negative semantics around it, including accountability. Prefers the term “immersive”.

Bob talks about the ability to become things you aren’t in the environment, whether that’s a molecule to better understand how physics works, or seeing what it’s like to be in a wheelchair, or changing gender.

He’s asked what the implications of immersive environments are for university enterprises.

They’re looking at the undergraduate health sciences curriculum – can’t find anatomy profs anymore. Can’t supply cadavers for education – why do you need them? The curriculum is the same worldwide – you’ve got buildings on campuses with students coming in and getting bored. What does it cost to heat, cool, and illuminate those buildings? We have brought no imagination to these challenges. We have to look at the cost of operations. What is the cost of distributing rolls of toilet paper into thousands of classroom buildings?

All of the retired faculty could be participating in these immersive environments to bring education to many more people.

Macro nodes – very large data centers sitting next to hydroelectric generating stations. Biological scientists haven’t figured this out to the extent that astronomers and physicists have with things like the Hubble – collaborate to create a resource.

We have to stop thinking about physical space as the basis of anything except for those things that absolutely require it.

We have no technology excuses – the fundamental issue is will. Oil prices will drive that will.

We have to stop asking students to do pattern recognition. The way we’ve been evaluating students doesn’t have anything to do with the challenges they will face. But if they have to get along with a group of others to actually accomplish something, that will translate.

Bob invites people to contact him and work with them in Second Life.

[ECAR Summer 2008] Richard Katz intro

Richard leads off the Summer ECAR Symposium with an introduction.

He cites some in progress research that indicates the importance of cyberinfrastructure to the work of higher education and our success in building that infrastructure over the last 25 years. He thinks that our concerns need to shift from engineering to how the network transforms us as individuals and our institutions and societies. He quotes R.D. Laing on change “… we begin to see the present only when it is already disappearing.”

Question: Are we building tomorrow’s technology for yesterday’s world? (or yesterday’s university?)

The mashup is the dominant theme of today – Croatian flight attendants on Ryanair, didgeridoo bands in Geneva.

Higher Ed, c. 2008 – shift from public good to private investment, rising costs, increasing pressures on revenues, increasing pressures to account for student success and institutional performance, democratization of access to university, increasing competition for talent, funds, influence, balkanization. Higher ed is of increasing importance to the world economy.

The demographics of higher ed are identical across the developed world.

The Cloud continues to gain in importance and acceptability. How does the university reach out into the cloud to extend its present? How do we look at the cloud as a potential provider of new forms that can either invigorate or threaten our missions?

Trends: shifting balance of power; rising consumerism; the rise of ‘truthiness’; emergence of the collective; technology rolls on.

Student engagement is decreasing, according to several indicators, despite “really neat IT.”

Privatization of knowledge may impede the free flow of information.

Really Neat IT does not equate to great teaching. But great IT has helped great research. We’ll hear more of that in the next couple of days.

Question: what vision or metaphor will define our boundaries and inspire our reach? If we continue adding incrementally we’ll build a highway to a sidetrack. Examples: The School of Athens; the Log College; the Student Free University; Murdoch University; Virtual U.

To frame our next steps: What is the ‘idea’ of the university? What is the institution really trying to do? What does the institution really need to do well to manifest its intent? What are the information infrastructures needed?

[ECAR 2007 Winter] Nicole Ellison – Facebook Use On Campus

Nicole Ellison, from Michigan State University, is talking on Facebook Use on Campus: A Social Capital Perspective on Social Network Sites.

Social network sites allow individuals to: construct a profile, articulate a list of other users that they’re connected to, and view and traverse their list of connections and those of others.

In Facebook people are primarily articulating an existing offline network, as opposed to trolling for new connections. An estimated 79-95% of all undergrads have Facebook accounts.

Who’s using Facebook? White students more likely (Hispanic students more likely to use MySpace). Students who live at home less likely to use social network sites.

When are students using Facebook? Not substituting for f2f time – use is less during weekends, for example. During summer it’s higher – when they’re not together.

Did a series of surveys of MSU undergrads, interviews and cognitive walk-throughs, and automated capture of web content.

What are students doing on Facebook?

  • Engaging in online self-presentation – going to be an increasingly important skill as digital citizens.
  • Engaging in social behavior: converting latent ties to weak ties; maintaining existing relationships; resurrecting past relationships
  • Converting latent ties to weak ties – ties that are technically possible but not yet activated socially – e.g. someone who’s in a large lecture class with me but I haven’t spoken to yet. FB makes it easy to find out about these people, through their profile. Hypothesize that having that kind of social info about people lowers barriers to f2f contact. FB enables managing a large network of weak ties.
  • Maintaining relationships – students use FB to remember phone numbers or dorm room numbers (interesting thoughts wrt our directories).
  • Resurrecting past relationships – maintaining contact with high school friends.

Students surveyed said they had an average of over 250 FB friends and around 150 friends at that campus, and about a third of those are actual friends.

Social capital – benefits we reap from our relationships with others. Like other forms of capital it has real value. Bridging social capital is linked to weak ties – provides useful information or new perspectives for one another, but typically not emotional support. Bonding social capital reflects strong ties with family and close friends – support network.

Survey items measured FB intensity. Facebook intensity is a good predictor of bridging social capital. Bridging social capital may be especially important in the period of emerging adulthood (18-25). They found that FB helps students with low self-esteem build bridging social capital more than students with high self-esteem. In 2007 students reported 4 hours of Internet use a day and 54 minutes a day on FB.
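
For readers curious what that analysis pattern looks like, here is an illustrative sketch on synthetic data (not the MSU survey, and not Ellison’s actual model): an OLS regression with an intensity × self-esteem interaction, which is the standard way to test whether the benefit of Facebook intensity is larger for low-self-esteem students.

```python
# Illustrative only: synthetic data with a fabricated effect, showing the kind of
# interaction model that would test "FB intensity helps low-self-esteem students more."

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "fb_intensity": rng.normal(0, 1, n),   # standardized Facebook-intensity scale
    "self_esteem": rng.normal(0, 1, n),    # standardized self-esteem scale
})
# Fabricated relationship: intensity helps, and more so when self-esteem is low.
df["bridging_capital"] = (
    0.4 * df.fb_intensity
    - 0.2 * df.fb_intensity * df.self_esteem
    + rng.normal(0, 1, n)
)

model = smf.ols("bridging_capital ~ fb_intensity * self_esteem", data=df).fit()
print(model.params)   # a negative interaction term = bigger benefit at low self-esteem
```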

Stanford had a course (CS377W) on Creating Engaging Facebook Apps – two of the top five Facebook apps were from this course.

[ECAR 2007 Winter] Robert Kraut – Conversation and Commitment in Online Communities

Robert Kraut is the Herbert A. Simon Professor of Human-Computer Interaction in the Business School at Carnegie Mellon.

It’s interesting to study online communities because the interactions are exposed and documented.

Defining success:

– Success is multidimensional: transactional (did your question get answered? were resources exchanged?); individual (was commitment developed?); and group (did it successfully recruit and retain members, and persist over time?).

Developing Commitment:

Commitment develops over time, with the early phase especially fragile, and it’s a bi-directional process. There’s a cost-benefit analysis.

It’s rational for groups to be skeptical of newcomers. Newcomers take resources from existing members. The group is more likely to be welcoming if they perceive newcomers as “deserving”. Thesis: individuals may use self-revealing introductions to signal both legitimacy and investment.

He’s looking at research questions about whether groups ignore newcomers and whether conversational strategies encourage group members to pay attention to newcomers. Looked at 99 Usenet groups, around 40k messages. They’ve seen that groups respond less to newcomers across the board, particularly in political and hobby groups. They then used machine learning to analyze messages to find self-introductory messages. Attempt to predict whether a given message will get a reply. Found that newcomers with a self-introduction are treated as well as old-timers without one. They found that messages with a group-oriented introduction (“I’ve been lurking here for a while…”) almost doubled the chance that a message would get a reply.
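
As a concrete (and heavily simplified) illustration of that pipeline – not Kraut’s actual classifier or data – here is a sketch that trains a bag-of-words model to flag self-introductory messages, the kind of feature that could then feed a reply-prediction model.

```python
# Minimal sketch: classify whether a post reads like a newcomer self-introduction.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-made examples for illustration; the study used ~40k Usenet messages.
messages = [
    "I've been lurking here for a while and finally decided to post.",
    "Hi all, I'm new to this group and wanted to introduce myself.",
    "Longtime reader, first-time poster with a question about my setup.",
    "Does anyone know how to fix this compile error?",
    "Here's the schedule for next week's meetup.",
    "Can someone recommend a good book on this topic?",
]
is_intro = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, is_intro)

new_post = ["Hello everyone, I just joined and have been reading the archives."]
print(clf.predict_proba(new_post)[0][1])   # estimated probability of "self-intro"
```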

I wonder how this connects with the public profiles like in Facebook?

When will individuals “join”?

Individuals evaluate potential benefits from the group. The reactions they get from initial attempts to engage the group will be especially meaningful. Hypothesis is that people will be more likely to continue to participate if people respond to them, and if the reply comes from people with higher status in the group and if they are positive in attitude. Found that only 20% of newcomers who don’t get a reply to their initial message are seen again, while 40% of those that do get a reply are active subsequently. The idea of the “welcoming committee”, like Wikipedia has, is very useful in developing commitment. The more central the replier is to the group, the more powerful it is for developing commitment of the newcomer. The tone of the welcoming language also has an effect.

I asked whether they’ve done any work in looking at the formation of new online communities and what factors might lead to success. It’s hard to research the formation of new communities because it’s hard to catch communities at the moment of formation. The problem with starting new groups is that there’s no content, so no reason for people to go there – a chicken and egg problem. One thing that helps is to find niche markets where people have a very high need for information sharing and will accept relatively low returns as worthwhile. Working with an existing organization might help.

Facebook is a fantastically successful community. Some of its success can be attributed to it having started with a small handful of communities (universities) that provide a pre-existing connection (students at the same institution), and then built on the early success, with the latest example being opening up the API to allow other people to build new services for the community.

In his courses they’ve been using Drupal because it offers lots more flexibility than course management systems. Even then they’ve had a hard time getting students to participate – so they’re learning how to issue challenges, use reputation-building systems, and other techniques to encourage participation. In on-campus communities, the hostility towards newcomers is less of a problem because people already consider themselves part of an existing collective.

[ECAR 2007 Winter] Guy Creese – Should We Work In The Cloud?

Guy Creese is an Analyst with the Burton Group, talking about the Pros and Cons of Software as a Service.

During 2007 the major players have all jumped into SaaS productivity applications. From Burton’s point of view the market is immature, but over the next couple of years there will be lots of progress and it will become a buyers’ market.

Different user types map onto different parts of the Communication-Collaboration and Asynchronous-Synchronous axes. Some are better at living in the cloud than others, based on roles, generations, and skills.

CFOs love SaaS because they can treat it as an operating expense.

SaaS typically has faster development cycles than packaged software.

In commercial institutions SaaS is more expensive than packaged software in the long run, though that’s not true with educational discounts.

You can configure SaaS, but not customize it. And it doesn’t usually support offline work. UIs are not as rich as local software, though that’s less true than it used to be, thanks to Ajax.

A third party is hosting the content, leading to security and intellectual property concerns.

Customer has no control over product rollouts – clients instantly get what the provider releases.

Records management is difficult and requires extra work.

Lots of players in the market now:

Adobe – going for platform-neutral collaboration, with Flash-based apps. They offer word processing (Buzzword), web conferencing (Connect), and document sharing (Share).

Cisco – collaboration is key and will generate demand for network gear. They acquired WebEx primarily for the web conferencing, but got WebEx WebOffice in the bargain, which offers shared calendar, web meetings, email, and database.

Google – SaaS is the wave of the future. Premier/Education edition – 5+ GB mailbox, IM, Collab office apps (docs, spreadsheets, presentations), shared docs.

Microsoft – Software plus services. Live@edu suite – 5 GB per mailbox, 500 MB of storage (SkyDrive), IM, Alerts, Collab.

Salesforce.com – SaaS is the wave of the future. Acquired Koral.com and is rolling it into Salesforce. Salesforce Content – Content tagging, automated content recommendations, community feedback and ratings, version control. Initially rolling out for CRM customers, but the company has worked a lot with k-12 schools.

Yahoo – strong email and API offering through the acquisition of Zimbra.

Folks seem to accept as valid that SaaS is here to stay. For higher ed, better infrastructure at a lower cost is a big driver. Hosted email is the typical point of entry. Common calendar is next, then document collaboration. User segmentation is key.

Guy had a good list of evaluation questions to ask when evaluating SaaS products.

Clemson offered fac/staff an opt-in to Exchange, then asked students what they wanted, and students told them they wanted Google Apps. This semester: 1848 students opted in, but only 694 forward their clemson.edu mail there. 195 employees opted in to gmail, with 69 forwarding.

Bruce Maas from UW Milwaukee notes that the policy issues are key.