Embedded librarianship in Blackboard: examples

Half of my title is “Distance Services Librarian,” and while I had taken online courses while obtaining my library science degree, I wasn’t sure how to start integrating library resources into online courses, which have grown massively in number here at John Jay. I talked with a lot of librarians at other colleges who worked with online classes, and many said they’d been embedded librarians.

The literature about embedded librarianship describes either a librarian assigned to an in-person class who shows up in the classroom every week (not what we’re talking about, and it sounds very exhausting) or a librarian who visits a Blackboard course and posts content. Looking into the latter, I found many articles on the topic but not a lot of actual examples. So here are some from my own experience.

Workflow of our embedded librarian program

  1. Instructors request a librarian to be enrolled in their online-only course for a week. Librarians arrange who’s going to take on the course.
  2. The librarian and instructor discuss which needs should be addressed. The librarian runs a tentative curriculum (a bulleted list of items they’ll post) by the instructor, just to make sure all objectives are hit.
  3. The Blackboard admin enrolls the librarian in the course with the instructor’s permission. On our campus, there’s a dedicated Librarian role in Blackboard, which has all the permissions of the Instructor role except access to the Grade Center.
  4. The librarian posts a folder of content early on Monday or the Friday before. See below for examples.
  5. During the week, the librarian answers questions in a dedicated discussion forum. This often reaches into the weekend, with several questions coming in on Sunday night, so the librarian should set expectations, e.g., “will respond to your questions within one business day.”
  6. The Blackboard admin un-enrolls the librarian.

Examples of embedded librarianship in Blackboard

These are screenshots from courses (edited to anonymize everything but me).

Example of content posted in a Blackboard course by a librarian: tutorial video, recommended databases, animated gif about keywords, citation information
Example from a lit class.


Invisible spam pages on our website: how we locked out a hacker

TL;DR: A hacker uploaded a fake JPG file containing PHP code that generated “invisible” spam blog posts on our website. To avoid this happening to you, block inactive accounts in Drupal and monitor Google Search Console reports.

I noticed something odd on the library website the other day: a search of our site displayed a ton of spam in the Google Custom Search Engine (CSE) results.

google CSE spam

But when I clicked on the links for those supposed blog posts, I’d get a 404 Page Not Found error. These spammy blog posts didn’t seem to exist except in search results. At first I thought this was some kind of fake-URL generation visible only in the CSE (similar to fake referral URLs in Analytics), but a regular Google search for an exact title also showed these spammy posts as being on our site.

spam results on google after searching for exact spam title

Still, Google was “seeing” these blog posts that kept returning 404 errors. When I looked at the cached page, though, I saw that Google had indexed what looked like an actual page on our site, complete with the menu options.

cached page displaying spam text next to actual site text

Cloaked URLs

Not knowing much more, I had to assume that there were two versions of these spam blog posts: the ones humans saw when they clicked on a link, and the ones that Google saw when its bots indexed the page. After some light research, I found that this is called “cloaking.” Google does not like this, and I eventually received an email from Webmaster Tools with the subject “Hacked content detected.”
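Mechanically, cloaking is simple: the server branches on the request’s User-Agent (or the crawler’s IP) before deciding what to return. Here’s a minimal sketch of that logic in Python; the function and names are hypothetical illustrations, not the actual malicious code we later found:

```python
# Sketch of user-agent cloaking: search engine crawlers get spam content,
# human visitors get a 404. Hypothetical example, not the real payload.
SPAM_PAGE = "<html>...spam blog post linking to other spam sites...</html>"

CRAWLER_SIGNATURES = ("googlebot", "bingbot", "slurp")

def respond(user_agent: str) -> tuple[int, str]:
    """Return (HTTP status, body) depending on who is asking."""
    ua = user_agent.lower()
    if any(sig in ua for sig in CRAWLER_SIGNATURES):
        # A crawler: serve the spam so it gets indexed
        return (200, SPAM_PAGE)
    # A human visitor: pretend the page doesn't exist
    return (404, "Page Not Found")
```

This is exactly why the posts showed up in search results but 404ed in the browser: Google and I were being shown two different sites.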

It was at this point that we alerted the IT department at our college to let them know there was a problem and that we were working on it (we run our own servers).

Finding the point of entry

Now I had to figure out whether content was actually being injected into our site. Nothing about the website looked different, and Drupal did not list any new pages, but someone was posting invisible content, purely to show up in Google’s search results and build some kind of network of spam content. Another suspicious thing: these URLs contained /blogs/, but our actual blog posts have URLs with /blog/, suggesting spoofed content. In Drupal, I looked at all the reports and logs I could find. Under the People menu, I noticed that about a week earlier, someone had signed into the site with the username of a former consultant who hadn’t worked on the site in two years.

Inactive account had signed in 1 week, 4 days ago

Yikes. So it looked like someone had hacked into an old, inactive admin account. I emailed our consultant and asked if they’d happened to sign in, and they replied Nope, and added that they didn’t even like Nikes. Hmm.

So I blocked that account, as well as accounts that hadn’t been used within the past year. I also reset everyone’s passwords and recommended they follow my tips for building a memorable and hard-to-hack password.
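If you want to audit your own site for stale accounts, the underlying check is just “last login older than some cutoff.” A hedged sketch of that logic (the data shape is mine for illustration; in Drupal itself I did this through the People admin page, which lists last-access times):

```python
from datetime import datetime, timedelta

def stale_accounts(accounts, now, max_age_days=365):
    """Return usernames whose last login is older than the cutoff.

    `accounts` maps username -> last-login datetime (hypothetical shape;
    pull the real values from your CMS's user report).
    """
    cutoff = now - timedelta(days=max_age_days)
    return sorted(u for u, last_login in accounts.items() if last_login < cutoff)

# Example: the consultant's account hasn't been touched in two years
accounts = {
    "consultant": datetime(2013, 6, 1),
    "webmaster": datetime(2015, 11, 1),
}
print(stale_accounts(accounts, now=datetime(2015, 12, 1)))  # ['consultant']
```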

Clues from Google Search Console

The spammy content was still online. Just as I was investigating the problem, I got a mysterious message in my inbox from Google Search Console (SC). Background: in SC, site owners can set preferences for how their site appears in Google search results and track things like how many other websites link to theirs. There’s no ability to change the site’s content; it’s mostly a monitoring tool.

reconsideration request from google

I didn’t write that reconsideration request. Neither did our webmaster, Mandy, or anybody who would have access to the Search Console. Lo and behold, the hacker had claimed site ownership in the Search Console:

madlife520 is listed as a site owner in google search console

Now our hacker had a name: Madlife520. (Cool username, bro!) And they’d signed up for SC, probably because they wanted stats for how well their spam posts were doing and to reassure Google that the content was legit.

But Search Console wouldn’t let me un-verify Madlife520 as a site owner. To become a verified site owner, you upload a special HTML file Google provides to your website, the idea being that only a true site owner could do that.

google alert: cannot un-verify as HTML file is still there. FTP client window: HTML file is NOT there.

But here’s where I felt truly crazy. Google said Madlife520’s verification file was still online. But we couldn’t find it! The only verification file was mine (ending in c12.html, not fd1.html). Another invisible file. What was going on? Why couldn’t we see what Google could see?

Finding malicious code

Geng, our whip-smart systems manager, did a full-text search of the files on our server and found the text string google4a4…fd1.html in the contents of a JPG file in …/private/default_images/. Yep: not the actual HTML file itself, but a line inside a JPG file. Files in /private/ are usually images uploaded to our slideshow or syllabi that professors send through our schedule-a-class webform — files submitted through Drupal, not uploaded directly to the server.

So it looks like this: Madlife520 had logged into Drupal with an inactive account and uploaded a text file with a .JPG extension to a module or form (not sure where yet). This text file contained PHP code dictating that if Google or another search engine requested the URL of one of these spam blog posts, the site would serve up spammy content from another website; if a person clicked on that URL, it would display a 404 Page Not Found page. Moreover, this PHP code spoofed the Google Search Console verification file, making Google think it was there when it actually wasn’t. All of this was done very subtly: aside from weird search results, nothing on the site looked or felt different, probably in the hope that we wouldn’t notice anything unusual so the spam could stay up for as long as possible.

Steps taken to lock out the hacker

Geng saved a local copy of the PHP code, then deleted it from the server. He also made the subdirectory the files were in read-only. Mandy, our webmaster, installed the Honeypot module in Drupal, which adds an invisible “URL: ___” field to all webforms; bots keep filling it in, so their submissions are rejected, which may help thwart password-cracking scripts. On my end, I blocked all inactive Drupal accounts, reset all passwords, un-verified Madlife520 from Search Console, and blocked IPs that had attempted to access our site a suspiciously high number of times (oddly, these IPs were all in one block located in the Netherlands).
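For the IP blocking, the signal we went on was raw request volume per address. Pulling the top offenders out of an access log takes only a few lines; this sketch assumes the IP is the first whitespace-separated field, as in Apache common/combined log formats (adjust the field index for your server):

```python
from collections import Counter

def top_requesters(log_lines, n=5):
    """Count requests per IP address and return the n busiest.

    Assumes the IP is the first whitespace-separated field on each line
    (true for Apache common/combined log formats).
    """
    counts = Counter(line.split()[0] for line in log_lines if line.strip())
    return counts.most_common(n)
```

Anything requesting your site thousands of times more than a human plausibly would is a candidate for a firewall rule, though check it isn’t a legitimate crawler first.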

At this point, Google is still suspicious of our site:

"This site may be hacked" warning beneath Lloyd Sealy Library search result

But I submitted a Reconsideration Request through Search Console — this time, actually written by me.

And it seems that the spammy content is no longer accessible, and we’re seeing far fewer link clicks on our website than before these actions.

marked increase, then decrease in clicked links to our site

I’m happy that we were able to curb the spam and (we hope) lock out the hacker in just over a week, all during winter break when our legitimate traffic is low. We’re continuing to monitor all the pulse points of our site, since we don’t know for sure there isn’t other malicious code somewhere.

I posted this in case someone, somewhere, is in their office on a Friday at 5pm, frantically googling invisible posts drupal spam urls 404??? like I was. If you are, good luck!

Heads Up! in PowerPoint for library class sessions

Since my John Jay colleague Kathleen Collins wrote about using active learning strategies in library “one-shot” sessions, I’ve been experimenting with games and hands-on activities to keep students engaged in the material. Typically, I cover library research basics in the sessions I teach: breaking a research question down into keywords (this is hard for freshmen!) and finding books/articles.

I frequently refer to “Don’t Do Their Work: Active Learning and Database Instruction,” a fantastic article in LOEX by Jennifer Sterling, which covers different active-learning activities she uses in her classroom. One in particular has been a breakout success for my own teaching.

Heads Up! is an iOS/Android app from Ellen DeGeneres (et al.) based on the old game Password, wherein the player who’s “it” must guess a word they can’t see based on hints from their teammates. It’s a great way to get students thinking about synonyms and related words for keywords, and it absolutely starts the class session off with a high energy level.

Because this is happening in the library classroom, I have adapted Heads Up! for a PowerPoint presentation. It’s a little hokey — it’s just a list of words that appear on-click next to a one-minute timer gif. A volunteer from each side of the room stands in front of the projector screen so they can’t see the words, but their teammates can.

example from powerpoint slides

Download my PowerPoint slides for adapted Heads Up! (adapt further and reuse freely) »

Usually, students get between 2 and 7 words. Note that these are general words, not library-y words. Something easy and low-barrier to engage students from the get-go. So far, my favorite moment has been for the keyword “Chipotle,” for which half the classroom devolved into students shouting “Bowl! Bowl! BOWL! BOWL!” at their flustered classmate. (“Cereal? Spoon? Plate? Salad?? Soup??”) Probably the most laughter that’s ever occurred on my watch.

I swear by this activity! Students usually beg me to let them play another round, which wouldn’t hurt since it’s only 60 seconds. They absolutely get the connection between Heads Up! and the next part of my presentation, in which they pick keywords out of their actual research questions and find synonyms and related words, then trade worksheets with a classmate. This, too, is an activity inspired by that LOEX article.

keywords worksheet example

Download “Keywords” Word Document (adapt and reuse freely) »

Let me know if you use these or other active-learning approaches in your library classes. I’m always looking for fun ways to engage undergrads in the library curriculum.

What did I do this year? 2014–15 edition

librarian word cloud

I jump-start my annual self-evaluation process with a low-level text analysis of my work log, which is essentially composed of “done” and “to do” bullet points. I normalized the text (e.g., emailed to email), removed personal names, and ran all the “done” items through Wordle.
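The normalize-and-count step behind a word cloud is easy to reproduce yourself. A sketch, with an illustrative (not my actual) normalization table and stopword list:

```python
import re
from collections import Counter

# Illustrative normalization table: collapse inflected forms to one token
NORMALIZE = {"emailed": "email", "emails": "email", "updated": "update"}

STOPWORDS = frozenset({"the", "a", "an", "to", "and", "for", "of", "about"})

def word_counts(log_text):
    """Tokenize a work log, normalize word forms, and count frequencies."""
    words = re.findall(r"[a-z']+", log_text.lower())
    words = (NORMALIZE.get(w, w) for w in words)
    return Counter(w for w in words if w not in STOPWORDS)
```

Feed the resulting counts to any word-cloud or charting tool; the biggest words are where your year went.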

2014–15 was my third year in my job and the third time I did this. (See 2012–13 and 2013–14). I do this because it can be difficult to remember what I was up to many months ago. It’s also a basic visualization of where my time is spent.

What did I do at my job this year?

Aside from the usual meetings, emails, and Reference Desk duties…

  • chat: I implemented a chat reference service with my colleagues (this had been tried before on this campus, but with subpar software and bad staffing experiences; this time, we have limited hours and are very happy with LibraryH3lp)
  • 50th: I worked on a physical and digital exhibit on the 50th anniversary of John Jay
  • mmc: We rolled out the Murder Mystery Challenge for the second year
  • l-etc: I co-chaired the LACUNY Emerging Tech Committee for the second year
  • dc: I worked more on our Digital Collections site, importing materials and refining the UX
  • mla: I went to MLA 2015 in Vancouver and gave a presentation
  • onesearch: We further implemented CUNY’s web-scale discovery service; I organized and ran a usability testing session with my colleagues
  • caug: I began to convene the CollectiveAccess User Group at METRO
  • socialmedia: I became more active on behalf of the library on the @johnjaylibrary Instagram account
  • newsletter: I designed two more biannual issues of Classified Information, our department newsletter
  • drupal, page, fixed, update, added, etc.: I continued to maintain the library’s Drupal-based website

What’s on tap for 2015–16? Lots of online education outreach and much more instruction than I’ve previously done! I’m also starting to flex my writing muscles, starting with a quarterly column in Behavioral & Social Sciences Librarian.

CollectiveAccess work environment

I wrote earlier about our CollectiveAccess workflow for uploading objects one-by-one and in a batch. Now I’ll share our CollectiveAccess work environment. We use two Ubuntu servers, development (test) and production (live), both with CollectiveAccess installed on them. We also use a private GitHub repository.

This is only one example of a CollectiveAccess workflow! See the user-created documentation for more.

Any changes to code (usually tweaks to the layout of the front end, Pawtucket) are made first on the dev instance. Once we’re happy with the changes and have tested the site in different browsers, we commit and push the code to our private GitHub repo. Then we pull it down to our production server, where the changes become publicly viewable.

Any changes to objects (uploading or updating objects, collections, etc.) are made directly in the production instance. We never touch the database directly, only through the admin dashboard (Providence). These data changes aren’t made in the dev instance; we keep only ~300 objects on the dev server, since more would take up too much room and there’s no real reason to mirror all our objects there. But if we’re uploading a new filetype for the first time, or there’s another reason an object might be funky, we add it as a test object to the dev server.

Any changes to metadata display (e.g., adding a new field to the records) are made through the admin dashboard. I might first try the change on the dev instance, but not necessarily.

Pros:

  • code changes aren’t live immediately and there is a structure for testing
  • all code changes can be reverted if they break the site
  • code change documentation is built into the workflow (git)
  • objects and metadata are immediately visible to the public, and faculty/staff working on the collections don’t need to know anything about git

Cons:

  • increasing mismatch between the dev and production instances’ objects and metadata display (in the future, we might do a batch import/upload if we need to)
  • this workflow has no contact with the CollectiveAccess GitHub, so updates aren’t simply pulled, but rather manually downloaded; new files overwrite old files

Not pictured or mentioned above: our servers are backed up on a regular basis.

CollectiveAccess super user? Add your workflow to the Sample Workflows page!