https://www.blogger.com/comment.g?blogID=1306183455487090817&postID=4354036600189758624&page=1
https://www.blogger.com/comment.g?blogID=4349363433820294527&postID=4640727964295703840&page=1
Monday, December 8, 2008
Sunday, November 30, 2008
Monday, November 24, 2008
Friday, November 21, 2008
Week 11 muddy point
This is regarding the DiLight hands-on point. I registered and everything and was looking around, and a ton of the links didn't work - I kept getting an 'internal server error'. Also, a few links from the slides were outdated and didn't work. For an IT class, this isn't a good thing.
Thursday, November 20, 2008
Week 12 Readings
Weblogs: Their use and application in science and technology libraries
This article provided an interesting history of blogs (in all honesty, I hate that word), and I find it very neat that they sprang from sites that summarized new/interesting websites way back in 1996, when the Internet was obviously much smaller. I wonder if things like Digg are similar to this now, or are there other sites that still do this? I really do not know, but I would think it would be next to impossible because of the sheer size of the Internet.
I like the whole idea of 'co-evolution' instead of control when it comes to interaction between people on blogs. I can only assume that's a main draw of them, and I am starting to see how they could really help members of research projects, especially in medicine and other fields where timeliness of publication, etc. is very important.
I didn't really understand the whole 'having to go to the website' criticism of blogs. This is the only criticism mentioned in the article, and it is laughably minor. I wonder if other criticisms exist.
It is going to be more and more important for librarians of all types to know how to create and use blogs effectively. However, creating them is the easy part. Finding the time to keep them updated, etc. isn't so easy.
Using a wiki to manage a library instruction program...
This article provides a good definition and various uses of wikis in a very specific setting. The main idea I got from the article is that knowledge should be shared, coordinated and built upon and that's what wikis do. In the long run, I wonder what effect wikis and blogs will have on advances in science and technology.
Creating the academic library folksonomy...
Trying to 'catalog the Internet' seems like an impossible task, but social tagging of items from a variety of web sources (licensed databases, etc.) is a good replacement for that idea of cataloging. The article is right that there is a lot of 'gray literature' out there. Right now at work, we have a subscription to the New York Academy of Medicine Gray Literature Report, which arrives bimonthly to my email address and provides links to hundreds of gray articles, reports, etc. Here's a link to the most current report
This report helps, but how do we really know what else is out there that would benefit the research being done in my department? Hopefully, because of things like Zotero and CiteULike, which are somewhat easy to use and very helpful, more and more people will get turned on to tagging.
Wikipedia video
It's interesting that this class has been the only one in my MLIS program that had readings from Wikipedia. Most other professors quite adamantly forbid the use of it. Because of this, it was just beaten into my head that Wikipedia was useless for research purposes and that information found on it cannot be trusted. My opinion has changed from watching the video. Wales said that 'everyone should have free access to the sum of human knowledge', and that is quite a great goal to have. I did not realize that changes, etc. were handled by a close-knit community who take their jobs very seriously, and he also points out that people who write an encyclopedia for FUN tend to be smart - I don't think anyone could disagree with that!
The fact that they handle controversial articles well, are quick to take action when someone does something bad (i.e. the skinhead example), and claim not to have the built-in biases that other encyclopedias and textbooks have makes me trust it a lot more. Verifying a Wikipedia entry's references will always be necessary (and just a good practice), but at least I feel like I can use it with more confidence now.
Monday, November 17, 2008
Friday, November 14, 2008
Week 11 comments
Digital Libraries: Challenges and Influential Work
This article discusses the powerful tools we have to access resources and the changes that are happening to make access more efficient. It is a good history lesson in how digital libraries really came about and how federal funds played a big part in what we have to work with now and what we will have in the future. I think it was very forward-thinking (which I don't normally say about the government) of the federal government to work with the NSF and NASA to fund the DLI. A lot of the technologies that complement the activities funded by the DLI were not federally funded, either.
Federated searching seems to be an important issue that still needs to be addressed now.
One thing I was curious about in the article was how it discussed metadata searching v. full-text searching. I wonder if Google is doing or looking into being able to conduct both types of searching.
The last issue the article mentions is library portals and how the NISO Metasearch Initiative is trying to develop standards so libraries can offer one-search access to multiple resources through an easy Google-type page. I sometimes have difficulty searching the CLP site, for instance, and think it's too busy and takes too many clicks to actually get to what is needed. Having an easier way to search all of the databases at one time would be a very good thing, especially since a lot of people are used to this type of searching/retrieving.
Dewey Meets Turing
This article sort of pits computer scientists against librarians and then resolves the issues each discipline faces in working together in the world of the DLI. It appears that both sides want to hold on to their traditional roles and still be able to move forward together, and by the end of the article it is clear that this is possible and is currently happening. Librarians of the future will be working even more closely with computer scientists in the emergent institutional repository realm, for instance. All librarians will have to be more forward-thinking and proactive to help find solutions to some problems that still remain, and to know that they still have a very viable and important job to do, just like the computer scientists.
Institutional Repositories
Lynch wrote a very interesting article that stemmed from a talk he gave at a workshop on institutional repositories and their role in scholarship. It does seem like institutional repositories, if handled properly, could really increase collaboration between different universities, especially when it comes to data sharing, etc. Right now, these collaborations can be very costly and unwieldy, especially in medicine, where databases, etc. have to be run and funded (sometimes at very high cost) through the grant. The money that could be saved here could be used for more actual research.
Another interesting thing about the article was the discussion of the increase in traditional journal articles having supplementary materials published online. This is both a blessing and a curse. The New England Journal of Medicine, for instance, has been doing this for a while now and has slowly gotten better with: 1) actually making it very clear in the article that there is some supplementary material available; and 2) making it possible to find and access this information well after the publication date. It appears that the supplementary material will be forever linked with the actual article. However, when one downloads the PDF of the article itself, the supplementary information is not there. A better system would have it all in the same file, with the option for the user to print/save the supplementary material. Right now, the system is still a bit burdensome. Perhaps a better solution would be to have all of this information in the author's institutional repository, but then the question arises: would outsiders have access to it? Would the journal subscribers?
Just like Lynch mentions, institutional repositories have to be set up so as to further and enhance scholarly work, not make it more burdensome.
Thursday, November 13, 2008
Week 10 muddy point
With regard to link analysis and page ranking, I was wondering about Google. I seem to recall that people can pay to have their websites ranked higher in the Google results pages, and within the past 2-3 or so years the sites at the top of a Google results page are commercial sites. I guess I'm confused because doesn't this sort of defeat the purpose of link analysis, if one can just pay for a higher ranking even though the site isn't good (a low number of backlinks, for instance)?
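To keep the link-analysis part straight in my head, I sketched the textbook PageRank idea in Python. This is just my own toy example (the three-page 'web' and the 0.85 damping factor are made up), not Google's actual system:

# Toy PageRank: rank flows along links, so pages with more/better backlinks end up scoring higher.
links = {
    "A": ["B", "C"],   # page A links out to B and C
    "B": ["C"],
    "C": ["A"],
}
damping = 0.85
rank = {page: 1.0 / len(links) for page in links}

for _ in range(50):   # power iteration until the scores settle
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))   # pages ordered by their link-based score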
Monday, November 10, 2008
Friday, November 7, 2008
Thursday, November 6, 2008
Week 10 Readings
Web Search Engines Parts 1 & 2
These two articles were very helpful in defining a lot of terms and explaining how search engines are able to provide high quality answers to queries. The explanation of how data centers are set up with clusters, and how those clusters make it possible to distribute the load of answering 2000+ queries per second, was very interesting. The discussion of the amount of web data out there that search engines crawl through is staggering, and it makes sense that crawling is carried out by many, many machines dedicated to that purpose. The 'politeness delay' was also interesting to learn about, and it makes a lot of sense to build it into crawler algorithms.
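Out of curiosity, I sketched what a politeness delay might look like inside a crawler loop. The two-second delay and the generic 'fetch' function are placeholders I made up, so this is just the idea, not how any real crawler is written:

import time
from urllib.parse import urlparse

POLITENESS_DELAY = 2.0   # seconds to wait between requests to the same host (made-up value)
last_fetch = {}          # host -> time of our last request to it

def polite_fetch(url, fetch):
    host = urlparse(url).netloc
    wait = POLITENESS_DELAY - (time.time() - last_fetch.get(host, 0))
    if wait > 0:
        time.sleep(wait)             # back off so we don't hammer a single server
    last_fetch[host] = time.time()
    return fetch(url)                # 'fetch' stands in for whatever download function the crawler uses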
The second part explained certain indexing tricks that make it easier to process phrases and increase the quality of results. These are all things I take for granted when searching, but it's great to know that there's this massive infrastructure working away when I Google, for instance, "cat paw growth" like I just did today! As always though, the Internet will never replace a vet's examination!
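As I understood it, the phrase trick comes down to storing word positions in the inverted index, so I wrote a toy version to convince myself (the two 'documents' are invented):

from collections import defaultdict

docs = {1: "cat paw growth", 2: "growth of a cat"}   # made-up mini-collection

index = defaultdict(list)   # term -> list of (document id, word position)
for doc_id, text in docs.items():
    for pos, term in enumerate(text.lower().split()):
        index[term].append((doc_id, pos))

def phrase_hits(phrase):
    # Documents where the words appear side by side, in order.
    words = phrase.lower().split()
    hits = []
    for doc_id, pos in index.get(words[0], []):
        if all((doc_id, pos + i) in index.get(w, []) for i, w in enumerate(words[1:], 1)):
            hits.append(doc_id)
    return hits

print(phrase_hits("cat paw"))   # -> [1]: only the first document contains the exact phrase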
OAI Protocol for Metadata Harvesting article
This article provides a brief description of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) and requires a 'relatively high level of familiarity with how the protocol works...' Even though I do not have familiarity with OAI, the article does describe the two parts of OAI (data providers and harvesters) and explains that the protocol provides access to parts of the "invisible web". Since different communities use this protocol to meet their different needs, obviously it has to be nonspecific. The article lists future enhancements and future directions. Since the article was written, I'm curious whether these enhancements have been made. The most interesting future direction they mention is the importance of using a controlled vocabulary. They mention that the normalization of the numerous different controlled vocabularies used by different data providers is 'prohibitively resource intensive', but they do mention that in the future 'authority agencies' could use their thesauri to be able to access items in the repository. This probably should be a high priority because there's really no point in having all of this data in one place but not being able to find what you need easily.
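After reading this, I tried to picture what a harvester actually sends: basically a URL with a 'verb' attached. Here's a minimal sketch - the repository address is a hypothetical placeholder, though ListRecords and the oai_dc metadata prefix are the actual protocol pieces the article describes:

from urllib.request import urlopen
from urllib.parse import urlencode

base = "http://repository.example.edu/oai"   # hypothetical data provider endpoint

params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}   # ask for records as simple Dublin Core
with urlopen(base + "?" + urlencode(params)) as response:
    xml = response.read().decode("utf-8")    # the provider answers with an XML document of records

print(xml[:500])   # peek at the start of the response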
Deep Web White Paper
This paper from 2001 describes the Deep Web and is the first known attempt to actually study the Deep Web in quantifiable terms. The Deep Web is the large part of the web that is not searchable by using typical search engines like Google. As of their 2001 data, only 0.03% of web pages are searched when using search engines. This number has probably gone up since search engines probably have better crawlers now, but by how much?
It is curious that Deep Web sites, at least in 2001, got far more traffic than surface web sites. At first I found this surprising, since search engines do not typically turn up Deep Web sites based on a searcher's query and, hence, I thought they would receive far less traffic. I guess it is safe to assume that they get more hits because they are used by a specific group of people who do not get to them by using search engines. This is also surprising considering that the subject areas of Deep Web sites don't appear to be very different from the surface web (Table 6). The fact that the Deep Web has more quality results than the surface web is somewhat worrisome, and I wonder if the number of Deep Web pages has decreased because more of them can be crawled now than seven years ago, thanks to Google, Yahoo, etc. coming up with better crawler algorithms. As information seekers, we obviously have to take care and figure out better ways to find relevant information in the Deep Web.
Saturday, October 25, 2008
Week 9 Comments
https://www.blogger.com/comment.g?blogID=1475137707322366107&postID=351362268738673599&page=1
https://www.blogger.com/comment.g?blogID=5152184136838295923&postID=2734161564624192205&page=1
https://www.blogger.com/comment.g?blogID=3413864360557025238&postID=5977603967671795216&page=1
Thursday, October 23, 2008
Week 9 Readings
Maybe it's because I'm not well-versed in HTML yet, but all of the XML readings and the tutorial were very hard for me to understand. I'm hoping that reading my classmates' blogs on these readings will help, but at this point I have a feeling it won't really come together for me until next week's lecture.
I do have to say that at the beginning of the Bergholz article, the figures showing the differences between XML and HTML were helpful, but then I got lost again.
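What helped a little was writing out a tiny example for myself: HTML tags say how something should be displayed, while XML tags are whatever names describe your data. The little book record below is entirely my own invention, just to see the difference:

import xml.etree.ElementTree as ET

# In HTML you are stuck with tags like <p> and <b>; in XML you name the parts of your data yourself.
record = """
<book>
  <title>Gray's Anatomy</title>
  <author>Henry Gray</author>
  <year>1858</year>
</book>
"""

book = ET.fromstring(record)
print(book.find("title").text)   # -> Gray's Anatomy
print(book.find("year").text)    # -> 1858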
Week 8 Muddiest Point
I'm not sure about the FTP part of Assignment 6 or what program to use to get our websites onto the Pitt server. After some Googling, I found FileZilla, which I think I may be able to use, but I'm not sure yet.
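For my own notes, this is roughly what the upload step looks like with Python's built-in ftplib. The host name, login, and folder below are placeholders, not the actual Pitt server details, so it's only a sketch of the idea:

from ftplib import FTP

ftp = FTP("ftp.example.edu")        # placeholder server address
ftp.login("username", "password")   # placeholder credentials
ftp.cwd("public/html")              # change to the web directory (hypothetical path)

with open("index.html", "rb") as page:
    ftp.storbinary("STOR index.html", page)   # upload the local file

ftp.quit()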
Monday, October 13, 2008
Friday, October 10, 2008
Week 8 Readings
For someone who has never really used HTML, this tutorial was great. It started out with a basic explanation of what the different parts of the code mean (start tag, end tag, element content, etc.) and also how important it is to use lowercase tags. It also made it clear that the most important tags define headings, paragraphs and line breaks, and to always be aware of these and not to skip end tags. The numerous examples to play with were very helpful, along with the 'useful tips' located throughout the tutorial. The one thing I noticed is that it is VERY important to spell correctly in the code. That's what I kept screwing up the most, but I am confident now that I can do some basic HTML coding. The Cheatsheet and the Quick List provided by the tutorial will be good reference sheets to use in the future.
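To double-check my understanding of start tags, end tags and element content, I ran a tiny page through Python's built-in HTML parser; the sample page is my own and uses the tags the tutorial stresses (a heading, a paragraph, a line break):

from html.parser import HTMLParser

class TagLogger(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("start tag:", tag)
    def handle_endtag(self, tag):
        print("end tag:  ", tag)
    def handle_data(self, data):
        if data.strip():
            print("content:  ", data.strip())

page = "<html><body><h1>My Page</h1><p>First line<br>second line</p></body></html>"
TagLogger().feed(page)   # prints each start tag, end tag, and piece of element content in order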
The CSS tutorial was a bit tougher to get my head around, but I figure the better I get at HTML, the easier the CSS stuff will become.
The CMS article was interesting in that it explained clearly why consistency is so important in CMS and how the library guides work better when there are standards for content, layout, etc. With standards in place, librarians can now focus more on content, which I would think they prefer. The article explained all the different steps involved in going from FrontPage to the new CMS and plans for the future.
Thursday, October 9, 2008
Week 7 Muddiest Point
I wonder what the general reaction to PURL is. It seems that people would tend to shy away from having a central agency that others go through to access their webpages. Is there concern about the possibility of OCLC getting hacked? Dr. He brought up that PURLs would be around as long as OCLC is, so it struck me that maybe this isn't the best way to fix the problem of URLs.
Wednesday, October 8, 2008
Assignment 5
Here's the link to my Koha Virtual Shelf (a reference collection I wish I owned and perfect for the Halloween season):
http://pitt5.opacwc.liblime.com/cgi-bin/koha/opac-shelves.pl?viewshelf=4
Monday, October 6, 2008
Assignment 4
Part 1
http://www.screencast.com/users/MonicaLove/folders/Jing/media/91f4c854-1c36-4498-93d4-5ecd90660577
Part 2
http://www.flickr.com/photos/83679611@N00/2902571992/in/set-72157607480045865/
http://www.flickr.com/photos/83679611@N00/2901762623/in/set-72157607480045865/
http://www.flickr.com/photos/83679611@N00/2901801861/in/set-72157607480045865/
http://www.flickr.com/photos/83679611@N00/2901808761/in/set-72157607480045865/
http://www.flickr.com/photos/83679611@N00/2901823169/in/set-72157607480045865/
http://www.flickr.com/photos/83679611@N00/2901834155/in/set-72157607480045865/
http://www.flickr.com/photos/83679611@N00/2901843791/in/set-72157607480045865/
Tuesday, September 30, 2008
Week 6 Muddiest Point
This may be a rudimentary muddy point, but I'm just curious as to what the traceroute information generated from that website would be useful for. I used CMU's link from www.traceroute.com and am curious about what the list means. The IP address I used is from my laptop, which I was using on my home wireless network.
Here's what I got from CMU:
traceroute Results
Results from: /usr/sbin/traceroute 136.142.64.121
traceroute to 136.142.64.121 (136.142.64.121), 30 hops max, 38 byte packets
1 POD-C-NH-VL4.GW.CMU.NET (128.2.4.44) 0.247 ms 0.214 ms 0.204 ms
2 CORE255-VL908.GW.CMU.NET (128.2.255.194) 0.317 ms 0.285 ms 0.272 ms
3 POD-I-CYH-VL987.GW.CMU.NET (128.2.255.250) 0.490 ms 0.397 ms 0.417 ms
4 bar-cmu-ge-4-0-0-2.3rox.net (192.88.115.185) 124.024 ms * 133.533 ms
5 pitt-cl-i2.3rox.net (192.88.115.151) 0.497 ms 0.457 ms 0.472 ms
6 cl2-vlan712.gw.pitt.edu (136.142.2.162) 3.056 ms 0.839 ms 0.633 ms
7 cl-wan5-cl-core-2.gw.pitt.edu (136.142.9.18) 1.300 ms 1.328 ms 1.281 ms
8 nb-cl.gw.pitt.edu (136.142.253.30) 112.341 ms 10.279 ms 106.211 ms
9 * * *
10 * * *
11 * * *
12 * * *
13 * * *
14 * * *
15 * * *
16 * * *
17 * * *
18 * * *
19 * * *
20 * * *
Timed out while processing: /usr/sbin/traceroute 136.142.64.121.
It appears the route goes from CMU to Pitt and then eventually to me (unfortunately, it kept timing out). Neat stuff!
Sunday, September 28, 2008
Thursday, September 25, 2008
Week 5 Muddiest Point
The one thing I was kind of confused about was how/why compression was developed for fax transmission and what type of compression it uses even now.
Week 6 readings
Computer Networks & LAN articles
The first article was helpful in describing the different types of networks, since I had only ever heard of local area networks previously. Also helpful was the explanation of intranets and extranets, since these are used by my department at Pitt but I never quite understood what they were.
The LAN article brought up an interesting point that networks started basically to save money since at that time, disk space and printers were much more expensive than they are now.
RFID
It doesn't appear that RFID would be that useful in a library setting. For reasons mentioned in this article, it is unclear whether the hassles/problems outweigh the benefits. The article is from 2005 and I wonder what, if anything, has changed since then.
Tuesday, September 23, 2008
Assignment 3
Hi - here's my link to my 40 articles on citeulike:
http://www.citeulike.org/user/monicalove
For my elderly and libraries topic, I had to tweak it a little to be able to find articles on citeulike, since the searching on there is a bit difficult.
Monday, September 22, 2008
Friday, September 19, 2008
Week 4 Muddiest Point
The one thing I was a little confused about (and maybe I just missed something): when we discussed metadata harvesting, combining various collections, and the ability to search regardless of the metadata format, is this the ideal, or is it something that can be done now? I wouldn't think so, since things like the Dublin Core and other standardization methods haven't really come to fruition yet.
Thursday, September 18, 2008
Week 5 Readings
Data Compression - Wikipedia article
I know a bit about data compression because I download a lot of movies and TV shows, and they come in .rar files that need to be extracted. My only knowledge, though, was that these files were 'rar-ed' to make them easier to transfer; I didn't really know how compression worked. My question would be: when using, for instance, WinRAR to extract compressed files, does it detect what kind of files it is extracting and then choose lossless or lossy automatically? Another thing I was surprised about was that data compression was being discussed/theorized about 50 years ago. The mention that 'the idea of data compression is deeply connected with statistical inference' makes it a lot easier to understand now.
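My tentative answer to my own question, after the article: an archiver like WinRAR is always lossless (you get the exact bytes back), and lossy compression only happens when a file format like JPEG or MP3 deliberately throws information away. A quick check of the lossless round trip with Python's zlib module, on a made-up repetitive string:

import zlib

original = b"aaaaaaaaaabbbbbbbbbbcccccccccc" * 100   # repetitive made-up data compresses well
packed = zlib.compress(original)
restored = zlib.decompress(packed)

print(len(original), "->", len(packed), "bytes")   # the compressed version is much smaller
print(restored == original)                        # True: lossless means the exact same bytes come back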
Data Compression Basics
This article was quite interesting. The examples of RLE and the discussion of when actual compression will and will not work made this whole concept easier to digest. I didn't really understand the discussion of entropy encoding and will probably have to give it another read.
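The run-length encoding examples were the part that clicked, so I tried a bare-bones version myself. The input strings are made up, and real RLE schemes are more careful than this, but it shows both the case where compression works and the case where it backfires:

def rle_encode(text):
    # Collapse runs of the same character into count-plus-character pairs.
    out = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append(f"{j - i}{text[i]}")
        i = j
    return "".join(out)

print(rle_encode("WWWWWWWWWWBWWWWWWWW"))   # -> 10W1B8W: long runs shrink nicely
print(rle_encode("ABCDEF"))                # -> 1A1B1C1D1E1F: no runs, so 'compression' makes it longer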
Imaging Pittsburgh
This article did a great job of explaining in detail the whole process of creating and bringing to reality this imaging project. This project will only increase access to rare and/or hard-to-find images of our great city for the general public and for researchers. The article explained the rationale behind many of the decisions made regarding the project, such as what controlled vocabulary to use (LCSH) and the use of different databases for the different groups involved. It clearly laid out the problems and solutions when tackling large digitization projects with multiple organizations.
YouTube and Libraries
The one thing I took away from this article was the idea that using YouTube could be especially good for college students who, like myself, didn't have to take a freshman orientation class. I stumbled around Hillman in undergrad and was not aware of MANY resources that could have been useful for me. YouTube may also be helpful for training/instruction of our growing aging population in public libraries. I think that older people will become more and more savvy with computers but may need more training. It could also benefit their health if, for instance, a public library had YouTube instructions on how to search appropriately on MedlinePlus (a consumer health information site). This is just one example off the top of my head, but there are countless other ways YouTube could be used to reach out to the aging population.
Saturday, September 13, 2008
Link for flickr assignment
http://www.flickr.com/photos/83679611@N00/
My two sets of photos are on the right column and are marked 'master/screen display' and 'thumbnails'.
Friday, September 12, 2008
Week 3 Muddiest Point
I'm glad there was a detailed discussion of open source software because I was always curious about Firefox. It is worlds better than IE: it takes up less space on computers, has support, more security, etc. The people who developed it must have been highly motivated, along with the people who contribute to it by developing all of the great add-ons and themes (Foxmarks being my favorite). I always wondered how something like this could be free. Does this mean that the developers don't make any money from it, or maybe they do from advertising or something? Perhaps this is a goofy question, but I really am curious and am surprised that the developers who make open source software this good are doing it for no monetary compensation.
Week 4 Readings
Database article
While I found most of the article a bit above my grasp, it was interesting to learn about the different types of models and think about which model would work for a specific user need. I would have liked the article to have contained some pictures of how these databases actually look on the screen, since I have not worked with them directly. Also, the section on applications could have been longer and I hope there is some discussion of this in class.
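Since the article didn't show what these models look like in practice, I put together a tiny relational example with Python's built-in sqlite3 module; the table and the rows in it are invented, just to see one of the models on the screen:

import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO books (title, year) VALUES (?, ?)",
    [("Gray's Anatomy", 1858), ("The Merck Manual", 1899)],
)

# The relational idea: data lives in tables, and a query describes what you want, not how to find it.
for title, year in conn.execute("SELECT title, year FROM books WHERE year < 1900 ORDER BY year"):
    print(title, year)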
Metadata & Dublin Core articles
It seems that the structure of metadata is very important. The more structured an information object is, the better for searching and manipulating that object. This also comes into play when thinking about increased accessibility and expanded use of information internationally (metadata being able to adjust to different end users, i.e. teachers versus school children). This cannot happen if the structure of metadata is not as consistent as possible. The article mentions the different practices for different professional and cultural missions when it comes to creating metadata.
This article was very helpful in pointing out the different ways that the term 'metadata' is used and the different types of metadata with 'real world' examples (Table 1). For someone new to this field, this is very helpful. In the Understanding Information course, we learned about the life cycle of information. It was interesting to learn how layers of metadata are added with every step through the life cycle.
Going back to consistency, this is where the Dublin Core comes into play. While the Miller article was OK, I could not get my head around it and searched for a more consumable (and more current) article. Luckily, I found a great Dublin Core Metadata Tutorial from 2007. Here's the link: www.oclc.org/research/presentations/weibel/20070709-brazil-dctutorial.ppt
This tutorial is from the OCLC Online Computer Library Center and does a great job of discussing metadata, the different properties of Dublin Core metadata, and also discusses syntax alternatives in Dublin Core (HTML, RDF/XML, etc) and their different advantages and disadvantages. Towards the end of the presentation (around slide 70), it provides a history of the Dublin Core and various landmarks. While I am not questioning the use of the assigned reading, it is a draft document from 1999 with numerous errors and after reading these slides I feel like I understand substantially more about metadata and the Dublin Core than from reading the Miller article.
Also, I found a list of terminology which helped me read through the tutorial:
http://dublincore.org/documents/abstract-model/#sect-7 (you may have to scroll down a little ways to #7). I did not know what some of the acronyms that Weibel used stood for, such as URI (uniform resource identifier).
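To see what a simple Dublin Core record actually looks like, I mocked one up. The element names (title, creator, date, identifier) and the dc namespace come from the tutorial, but the record itself and its identifier URL are invented:

import xml.etree.ElementTree as ET

record = """
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Pittsburgh Bridges Photograph Collection</dc:title>
  <dc:creator>Unknown photographer</dc:creator>
  <dc:date>1910</dc:date>
  <dc:identifier>http://example.edu/images/12345</dc:identifier>
</metadata>
"""

DC = "{http://purl.org/dc/elements/1.1/}"
for element in ET.fromstring(record):
    print(element.tag.replace(DC, "dc:"), "=", element.text)   # each Dublin Core element and its value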
Monday, September 8, 2008
Wednesday, September 3, 2008
Week 3 Readings
Linux
Before reading this article, I knew very little about Linux other than that it is an operating system. I was surprised to learn that its roots go back to UNIX, which started to be developed in 1969.
Some other thoughts:
- The introduction wasn't very helpful so I read some of Chapter 1
- The idea that it recycles code (and the chapter provides a good explanation of what this is) is what makes it so appealing and pretty cool.
- Surprising that "Linux is the only OS in the world covering such a wide range of hardware". I did not know it was so widespread.
- It still seems like it is more useful for programmers and not "desktop users" but it is becoming more user friendly
- The chapter has a great explanation of Open Source and why it's so important in the creation of better software faster.
1st article
- This article contained a good definition for people who do not know what it is, and noted that you need to be familiar with operating systems in general to use it.
- What are the 'OS religious riots' the author mentions?
- The rest of the reading was too technical
- The Leopard desktop screenshot is incredible looking and I want it for my computers.
- The Prominent Features section shows the neat Dashboard and desktop widgets and I notice a similarity with Vista (which I have on my laptop) but it's not nearly as neat and doesn't have nearly the functionality, it seems.
- The Criticism section was absent, and this would have been a useful section to read. What are some criticisms of Mac OS X?
- I read the letter and the comments (the ones I could understand anyway). Now I guess I understand why the IT guys in my department at Pitt haven't bothered installing Vista on our machines. If it is working fine and Windows 7 comes out in less than 2 years, just wait on it. Of course, if they do not know how to work with Vista, this could be a problem. I was worried about my refurbished Dell coming with Vista (I couldn't "downgrade" to XP like I wanted to), but I've had no problems with it and am enjoying using it.
Week 2 Muddiest Point
The one thing I was wondering about concerned the discussion of how a CD works when it is spinning in a machine and how it can be damaged, apparently pretty easily, by a hair, dust particle, etc. Why couldn't there be some type of protective coating, or CDs be made from a different type of material that isn't so easily damaged? Perhaps using other materials to make a CD would be cost prohibitive, but CDs have been around for quite some time now, so the technology probably shouldn't be as fragile as it is.
Thursday, August 28, 2008
Week 2 Readings
Computer Hardware
This article was a good read, and I appreciate it because, while I use a computer for 90% of my job and a lot outside of work (sports forums, torrents, setting up my home wireless network, etc.), I never really learned the "nuts and bolts". The easy-to-understand definitions of RAM, etc. were useful. However, there were too many abbreviations that weren't spelled out in the article - PCI, PCI-E, AGP. I know these have something to do with slots on the motherboard, but if someone could elaborate on this, I'd appreciate it! :)
Moore's Law
In all honesty, a lot of this article went over my head, so I do not have a whole lot to say about it. The discussion of the future and how Moore's Law applies was interesting. Thankfully the video did a good job of explaining Moore's Law without a lot of technical speak. The discussion of the difference between hardware and software improvement was interesting. It is understandable why software lags as far as programming goes, but will this always be the case?
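To get a feel for the numbers, I did the back-of-the-envelope doubling myself in Python. The starting point (the Intel 4004's roughly 2,300 transistors in 1971) and the two-year doubling period are the usual rough statement of the law, not exact figures:

transistors = 2_300                 # roughly the Intel 4004 in 1971
for year in range(1973, 2009, 2):   # double every two years up through about 2008
    transistors *= 2
print(f"{transistors:,}")           # several hundred million - about the ballpark of today's desktop processors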
www.computerhistory.org
I learned so much reading this site. Most interesting was the Internet history and finding out the origin of things like ASCII, routers, networks, etc. The multitude of high quality images on the site are great, especially the one of the high speed printer from 1975! Browsing some of the articles in Core was interesting, including 'Rescued Treasures'. Reading about all of the exhibits at the museum and then sitting back and thinking about it, it's quite amazing how much has happened with computer technology in the past 50 years.
Wednesday, August 27, 2008
Week 1 Readings
2004 Information Format Trends
This report was an interesting read. Here are my thoughts:
- Page 3, 5th para. When reading this, I instantly thought of blogs. While I know there are some legitimate blogs out there, the majority seem to be just people posting their opinions. What could become detrimental is people (young people) using blogs as a source for research purposes instead of more reliable ones.
- Page 7, 3rd para. I was surprised that 61% of people surveyed for the Blogads report found blogs to be more honest. There don't appear to be any editorial rules for bloggers, so I'm surprised that people assume so much honesty.
- The list of new vocabulary was very helpful. There were 6 words/concepts I had never heard of before.
- Page 4, 3rd para. and Page 5, 1st full para. While it is true that understanding of information technology skills is very important in every sector of society, I do not think such specialized knowledge can be had by everyone. This is why information specialists will become increasingly important in every field. The skills listed on Page 5 - indexing techniques, organizational systems, etc. - may be unreasonable for everyone to know, and that's why information specialists with these skills will be so important in medicine, engineering, etc.
- This article was very detailed in explaining how the Lied Library was developed and how it was able to become a first class high-tech research library. While it went into great detail explaining what new computer programs were used, what program upgrades were done, etc., the lack of discussion of the human element was noticeable in the article. Did the library staff have to be trained in the new programs? One would think they would be expected to be well-versed in the new systems to be able to help students. A companion article covering the same time period that discusses the creation of this library (and everything it entailed) through the eyes of a librarian would be interesting.
Week 1 Muddiest Point
Admittedly, my muddiest point from the lecture was the blog feed. I've never really read blogs or subscribed to RSS feeds (I'm assuming RSS feeds and the blog feed link that we're posting are one and the same), and I wasn't really sure what they are. I haven't done any research on my own yet to get to the bottom of this, but I plan to. If anyone wants to help, I'd be grateful!