I chaired the Business Intelligence lightning round at Educause. We had three varied and interesting presentations, from the University of New Mexico, the University of Oklahoma and Princeton. Three common challenges surfaced across the presentations.
Firstly, the importance of the data itself. Two of the presentations reported that they had spent considerable effort on cleansing data. This is a major exercise and should not be underestimated – one presenter commented that their data cleansing exercise was five years into an 18-month project. Cleansing data is perhaps like shutting the stable door after the horse has bolted – it is better to ensure that the data is accurate in the first place, and the importance of training staff was stressed. In the UK we had a presentation on data cleansing from the University of Exeter at the UCISA CISG conference. Their project, in common with the ones we heard about today, had two strands – correction and prevention. A brief summary of that presentation is available in an earlier blog posting of mine.
Getting accurate data is one thing; ensuring that those using or reporting on the data understand it is another. Twelve years ago I introduced a reporting tool into the HR department of the institution I was then working in, and I could almost guarantee a number of different answers even to a simple question such as "how many staff does the university have?". There needs to be a consistent approach, and staff need to understand a little about the data (and perhaps the consequences of getting it wrong) before they report on it. It is an issue that we are looking at in the UK, and HESA, the statistics agency, is investing some time in educating those in institutions about the consequences of getting their data wrong.
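To see why the same question can produce different answers, consider a toy staff list. This is purely an illustrative sketch with invented records and field names – depending on whether you count every person on the system, only active staff, or full-time equivalents, "how many staff?" has three defensible answers.

```python
# Invented staff records; "fte" and "status" are illustrative fields.
staff = [
    {"name": "A", "fte": 1.0, "status": "active"},
    {"name": "B", "fte": 0.5, "status": "active"},
    {"name": "C", "fte": 0.5, "status": "active"},
    {"name": "D", "fte": 1.0, "status": "on leave"},
]

# Three reasonable but different answers to "how many staff?"
headcount = len(staff)                                          # everyone on the system
active_headcount = sum(1 for s in staff if s["status"] == "active")
fte = sum(s["fte"] for s in staff if s["status"] == "active")   # full-time equivalents

print(headcount, active_headcount, fte)  # 4 3 2.0
```

All three numbers are "correct"; without an agreed definition, each report will pick a different one.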
Finally, when you get good aggregated data from a series of validated sources it can be put to a variety of uses. One such use is to identify students who are at risk of failing or who are otherwise struggling with university life. The University of Oklahoma were using their portal to detect potentially failing students. In the UK a number of institutions have been taking data from a wide range of systems to get early warning of problems, so that they can be addressed and student retention improved. This can involve checking the library system for evidence that a student has used the facilities, checking the security system to see whether the student has even been to the library, checking access to the VLE or other computer systems, and so on. A lack of evidence of engagement with the institution in all or some of these areas would be taken as a strong indication of a potential drop-out.
The value of business intelligence was highlighted in all three presentations. What was also highlighted was that it is not a trivial problem – as Ted Bross from Princeton pointed out, an enterprise BI project should be treated in the same way as an ERP implementation. Good business intelligence needs to be well planned and well resourced.