Wednesday, February 27, 2008

Thank You Paull Young!!! And G'day, Mate!

We were fortunate indeed to have Paull Young, Senior Account Executive for Converseon, Inc., a social networking consulting firm, come up to Fordham and give a talk to our class about professional opportunities associated with social networking. He was a dynamic and engaging speaker, especially for an Aussie (just kidding there), highly knowledgeable, and I know everyone learned a great deal from him.

If you click on the link, you'll find that Converseon lists among their services conversation mining (monitoring online conversation about a product or brand), affiliate and search marketing (including search engine optimization), brand reputation management (public relations extended to the online environment), and blogs and social media.

Paull has been blogging for many years now; his own blog focuses on public relations and, appropriately enough, bears the name Young PR.

An interesting concept that he introduced was astroturfing, which is the counterpart, in a sense, of a grass roots campaign. With astroturfing, what appears to be a grass roots initiative, or messages produced by private individuals, is secretly the product of organized effort, work done for hire on behalf of a political group or corporation. Paull provided us with an example that is both hilarious, reminiscent of Terry Gilliam's animation for Monty Python's Flying Circus, and at the same time highly sinister, because it masquerades as something done by some guy in his basement who just doesn't like Al Gore, but was actually produced and paid for by commercial interests. Here's the video:





Well, the good news is that Paull and some of his colleagues got together to set up an Anti-Astroturfing site and campaign. And just to be clear, here's a list of definitions from their site:

Definitions

From Wikipedia: In American politics and advertising, the term astroturfing describes formal public relations projects which deliberately seek to engineer the impression of spontaneous, grassroots behavior. The goal is the appearance of independent public reaction to a politician, political group, product, service, event, or similar entities by centrally orchestrating the behavior of many diverse and geographically distributed individuals.

From answers.com: Astroturfing describes the posting of supposedly independent messages on Internet boards by interested companies and individuals. In American politics, the term is used to describe formal public relations projects which deliberately give the impression that they are spontaneous and populist reactions. The term comes from AstroTurf -- the fake grass used in many indoor American football stadiums. The contrast between truly spontaneous or "grassroots" efforts and an orchestrated public relations campaign is much like the distinction between real grass and AstroTurf.

From the Jargon File: (The Jargon File is a compendium of hacker slang)
astroturfing: n.
  1. The use of paid shills to create the impression of a popular movement, through means like letters to newspapers from soi-disant 'concerned citizens', paid opinion pieces, and the formation of grass-roots lobbying groups that are actually funded by a PR group (AstroTurf is fake grass; hence the term). See also sock puppet, tentacle.
  2. What an individual posting to a public forum under an assumed name is said to be doing.
Oh, and here's their logo:


It is certainly a pleasure, and very much in keeping with our outlook here at Fordham University, to be dealing with professionals who have a firm commitment to ethical practices and a reflective approach to their business.

Anyway, as another example of the new and powerful phenomenon of social networking, Paull showed us one of the most popular recent videos on YouTube, "Star Wars according to a 3 year old," which, at the time of this writing, is up to 2,892,082 views!!!! It is an altogether charming little home movie, I must say:




Paull also showed us the highly successful YouTube campaign "Will It Blend?" which promotes BlendTec Total Blenders with the kind of stupid human tricks that David Letterman is known for. Here's the example he showed us, featuring Chuck Norris:





And Paull showed us one of Converseon's projects, Second Chance Trees for American Express, which was set up in Second Life, the three-dimensional virtual reality social network. They created a place there for people to enjoy, and gave people an opportunity to buy trees that would be planted both in the virtual world, where they can see them, and in the real world, where they otherwise would not be able to see the results of their donation. Anyway, here's the YouTube video on the project, which interestingly includes "machinima" among its tags:





And guess what? There's another YouTube video featuring a presentation by Paull Young on this project, so let's take a look at our friend here:



Not surprisingly, Converseon also has its own blog. And Paull also mentioned another website/blog worthy of our attention, FORWARD.

Interestingly, the fact that Paull spells his name with a double "l" came up, along with the point that it turned out to be fortuitous, because otherwise he would not be easy to pick out from all the other Paul Youngs when his name is googled. And that gave me the idea that in the future parents will want to give their kids unique names, in order to optimize their kids for search engines--in this way, technology may alter the time-honored traditions by which we name our children. And you can probably say goodbye to John Smith!

In sum, Paull gave us lots to think about, and lots to explore. So please join me in saying, G'day Mate!!

Tuesday, February 26, 2008

A Little Bit of Everything

I must admit that I never finalized my "draft" from the previous postings, so I'm a week behind on my thoughts. To make it up to you, I will include some fun materials I've come across since then throughout my post!

First and foremost, here is the Barack Obama music video I love so much, "Yes We Can" which was inspired by a speech he gave in New Hampshire.


Now for a little business (from last week):

Bolter says that "Graphics have played a role in printed books since the 15th century. With some important exceptions, such as atlases, printed books have firmly asserted the primacy of alphabetical text. Printed books contain illustrations; they are texts...As our culture moves toward a greater reliance on electronic graphic presentation, the qualities of printed prose are being displaced or marginalized."

I do agree with Bolter that printed texts are becoming primitive, except for the ongoing debate over people's preference for being able to hold a book in hand; there is something nostalgic and comforting about holding a book. This, I believe, will stay with our generation, but such sentiment is sure to dissipate with future generations. Think about how there are already massive transcriptions of texts online, such as Google Books, or the movement of periodicals and journals to online databases. Google Book Search intends to work with publishers and libraries to create a comprehensive, searchable, virtual card catalog of all books in all languages, topping out with a goal of 30 million texts over the next ten years. Online formats allow for delineated media and research via hypertext and hypermedia. There is an increasing preference for graphics and video, which leads me to believe that hanging out in a library, cross-referencing and perusing fictional novels, will seem like something of the past once it all becomes available from the comfort of one's own home.

Funnily enough, when I went onto Google Book Search, one of the first suggested texts to pop up was entitled "Rethinking Context: Language as an Interactive Phenomenon." It's almost the entire text and relevant to our discussion! I just skimmed through, but here is the link!:


And for my next treat:
comp comic

Now, to discuss Cybertime-
I had a question for clarification, and I apologize if it seems elementary. Lance Strate says, "it is generally accepted among scientists and philosophers that time does not exist independently of action, motion and event, but is in fact generated by physical change (hence, time's relativity in relation to speed)." So, am I correct in saying that you MUST have motion/speed in order to have time, and that without motion, action, or event, space (to most scientists and philosophers) might be considered to be a vacuum? I seriously think myself in circles with this.

In response to cybertime, I think that it has proven a great way to put into perspective the space in which we are interacting and placing data. Strate has presented an enormous amount of information and viewpoints throughout his essay, "Cybertime." The observation that VR does not change within time goes along with the idea of the virtual self and a previous discussion of creating a digital self/society that could live on past human existence. I appreciate Strate's acknowledgment of how "although our physical selves are subject to the ravages of time, our dream selves are the masters of cybertime." He is correct in saying that meeting with our data doubles might inevitably be disturbing; however, to continue the previous conversation about the perfect recreation of the digital self, would you not want to be best friends? Okay, just kidding.

Also, to pose a question: if we did indeed live in a surveillance society and a metadata organizer could sort and compile all the information traceable back to us, could a data double be created to simulate you? Information will eventually be available for some individuals' entire lifetimes, and "clones" or AI that learn to simulate your being seem like something out of a sci-fi-meets-horror film. After continuous discussion regarding these types of issues, I think it will almost be inevitable.


And now for a video that I'm sure Ted will appreciate, because it illustrates the harmful effects of Myspace (specifically on young children). This video is a bit disturbing, and I'm sure the kid was a bit provoked, but watch for a couple of minutes. He answers a few questions about his habits amidst all the chaos. Also please note that this video is a bit offensive; I couldn't even watch it all. There is a bunch of cursing and brief nudity. Not in my usual taste, but relevant to our ongoing discussion. So without further ado, here is "Kid Brother is addicted to Myspace"

Here are suggested spots to watch if you don't want to be patient: 1:14, 1:50, 2:07, 2:50, 3:11-3:20, 4:11 and 6:30.

Justin Cybertime

Many of Lance Strate's ideas about cybertime encompass the notion that, much like a human being, the internet lives in the present, ever-changing and adding to itself much as the human body does. The display that we all see on our monitors shows only one time: now. This is much different from clocks of old, where you get the impression of time passed and time to wait. Strate refers to this notion in saying that "the digital display is entirely present-centered, representing the time as nothing more than a discrete quantity." People can relive the past or look toward the future as much as they want, but they still exist in present time.

Not only does today's advancing internet cover the span of all time zones, it also seems to eliminate many time barriers and deadlines associated with the technology that came before it. Looking at anything that is even a few days old can be considered looking at ancient history in cybertime terms, because so much will be added and updated every second. By the same token, worrying about what the future may bring (Internet2, for example) is pointless, because of the massive amount of information and advancements that will occur in the time leading up to it.

One point about crossing time zones is the obvious interaction between people in far-away places around the globe. Clearly, this occurs every day in games, chat rooms, message boards, etc. If you were to play an online game with participants from around the globe for a full day, you would see when each time zone has its most popular hours of gaming. If you find yourself up at, say, 5 am and can't fall asleep (it happens around here), then you will not have to worry about finding friends or gamers online to compete with.

It seems that the most obvious overcoming of time barriers is through email communication. To me, e-mail erases a sturdy sense of time with regard to mail that was in place for centuries leading up to the breakthrough of the internet. The fact that you send the email at virtually the same instant as the person receives it shows that time has been erased, and if it took no time to gather your thoughts, you could have a normal conversation with someone across the globe as if they were sitting in your living room. Email has erased any burden of time on global communication.

When referring to the idea of deadlines being completely altered, I picture a large newspaper corporation. The employees of these publications must work around the clock leading up to a deadline for the next morning, when all the news is cut off and placed into the same process for the next day. Having an allotted period of time in which to fit as much of the day's news as you can brings about a sense of time running the business. In cybertime, however, a person can constantly update as the news updates, without the constriction of basing your news around a specific time on a specific date in a specific time zone. The difference shows how cybertime is much more concerned with the present: the deadlines that exist on the internet are every second. By constantly updating and changing form, these internet news publications are living in the now as much as you or I would, while working for a newspaper conforms a person to always looking ahead to the next day or trying to recap the prior one.

Although blogs like this one and message boards all over the internet show past posts with an attached time, I do not believe this exhibits the passing of cybertime or a concentration on the past. This is because I consider these types of Internet discussions to be living organisms. Alongside every post from the past there are new posts that can come in at any second from any place on the planet. Because new posts are constantly confirming, challenging, or responding to those prior, I believe that as the blog constantly changes as a whole, time in cyberspace is being overcome at every turn.

The Times They Are A Changing

Lance Strate brings up some very heavy ideas in his piece about 'Cybertime'. The apparent space bias he discusses shows our tendency to describe things in terms of the space they occupy. This thinking has passed over to the realm of computer technology and the Internet, infiltrating the design, ideology and terminology in which they function. Indeed, Professor Strate remarks how politicians (and inventors like Al Gore) use terms such as "information super highway" to conceptualize or explain the Internet. This predisposition is what sets up the direction of the essay in discussing the true and grossly underestimated importance of time to both the Internet and the computer.
An extremely powerful correlation made is that of the mechanical clock and the computer; this initial relation ties in many of the points included later on in the essay about how essential and versatile time can be.
"As clocks became common, became not merely useful but unavoidable. Men and women began to work, eat, and sleep by the clock...as soon as they decided to regulate their actions by this arbitrary measurer of time, clock was transformed...into a necessity of urban life" (363)
Among other shared characteristics of the clock and the computer are the measurement of duration, the ability to determine present time and the foresight to provide alerts. However, Lance Strate is careful to point out that there are a few differences...Since the computer presents time through the digital format, it offers in essence a decontextualized form of time, where only the present moment is of concern. Moreover, the establishment of digital time as something created and "malleable" through its existence as data means that it is not beyond the reach of the computer operating system (the "micro-world") to control what time really is. In this sense it does sustain a religious connection: everything is functioning because of a God, an elemental force keeping the balance going, that is until...this world shuts down, everything freezes, and time, along with everything else, has stopped.

While I might at first find this a stretch, I don't find myself hard pressed to see the point in the Y2K hysteria. Like a digital judgment day of destruction, this concern over a massive faulty setback in computer systems was one independent of the physical construct of our systems of time. Programmers and experts in the real world trying to alter the reality and working order of the computer micro-world could only hope to play the role of divine intervention and stop this independent progress of time (luckily it was all good).
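For anyone who never saw the technical side of the scare, here is a minimal sketch of my own (not from Strate or any actual affected system) of the kind of two-digit-year arithmetic that had people worried: once the century rolls over, the naive math silently breaks.

```python
# Minimal sketch of the classic Y2K problem: storing years as two digits
# ("99", "00") makes simple ordering and age arithmetic go wrong in 2000.

def age_two_digit(birth_yy: int, current_yy: int) -> int:
    # naive arithmetic on two-digit years
    return current_yy - birth_yy

print(age_two_digit(70, 99))  # 29 -- fine in 1999
print(age_two_digit(70, 0))   # -70 -- the rollover bug in 2000

def age_four_digit(birth_year: int, current_year: int) -> int:
    # the fix: carry the full four-digit year
    return current_year - birth_year

print(age_four_digit(1970, 2000))  # 30
```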

Another point Lance Strate brings up is about the concept of the future in computers. He cites how computers can run simulations of stock markets and changes in systems to demonstrate the passing of time; these processes are done in the present, and as such are invariably a reflection of the future within the context of the now. In this sense, digital time is not beyond that of real time. I'll be the first to admit that at times I was lost in the scope of the dense topic of cybertime, but I do believe this demonstrates the duality of cybertime as both existing in conjunction with the real world and expanding independent of it.

1 or 0: On or Off

I enjoy the passage from Rifkin which states:
The really good video game players are able to block out both clock time and their own subjective time and descend completely into the time world of the game.

And later:
Long-term computer users often suffer from the constant jolt back and forth between two time worlds.

I never thought about time in this manner, but it makes perfect sense to me. In my own experience, I often feel "jolted" when my thoughts leave the computer and return to reality. I can be sitting at my computer hours into the night and not feel sleepy or tired at all. But as soon as my eyes glance down at my clock, all of a sudden I feel like I'm about to fall asleep at the keyboard. It's an unsettling feeling, that return from cyber to real time.

Perhaps it's because the descent into cybertime leaves behind the traces of fatigue that real time can enforce. Cybertime has no space for sleep. Computers, once on, are always on, always occupying themselves with some computation. For a computer, it's 1 or 0, on or off; there is no state like sleep that a computer can relate to. In that same way, we forget sleep when we enter that cyber timezone.

Internet Time

We all already know that the great thing about cyberspace is that people can communicate with each other no matter what time zone they are in. Lance Strate says in the chapter "Cybertime" that "there is a tendency to experience them (e-mails) as if they were being communicated in the present. This sense of immediacy can also be present when reading other people's electronic discussions in the archives of bulletin boards, listservs, and so on" (379), and then Dery writes that what the reader is reading is taking place in real time. I can see how email and communicating with other people on the Internet makes communication instantaneous, though when looking back through old bulletins or conversations, wouldn't that just be similar to looking through documents from, say, a book? They are all archived there for people to look through; I guess the difference is that they still carry timestamps. I suppose it would apply more to interactive activities on the Internet in a virtual world, such as the example Rifkin gives about gamers creating a different sense of time while playing video games.

On another note, I found this statement to be interesting: "I believe that we will eventually find ourselves referring more and more often to Greenwich mean time, global time, or simply Internet time, rather than our regional time, and that new timepieces will be widely adopted that are capable of receiving broadcast time signals, thereby maintaining synchronization with the world clock" (368). Internet time has been around since 1998, though I don't see much of a difference between it and real time. The way the day is split up is different: 24 hours are split into 1000 ".beats," with each .beat measuring 1 minute 26.4 seconds. The only advantage of this system that I can see is that it takes away the restraints of time zones; otherwise it is still, in a way, similar to any other digital clock, just with different measurements, and still within a 24-hour period.
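If you want to see the arithmetic, here is a minimal sketch of my own (not part of the original post) that converts the current moment into .beats; it assumes the standard Swatch convention of counting from midnight in Biel, Switzerland (UTC+1, no daylight saving), with each of the day's 1000 .beats lasting 86.4 seconds:

```python
# Minimal sketch: convert the current time to Swatch Internet Time (.beats).
# 86400 seconds per day / 1000 beats = 86.4 s (1 min 26.4 s) per .beat.
from datetime import datetime, timedelta, timezone

def internet_time() -> float:
    bmt = datetime.now(timezone(timedelta(hours=1)))  # Biel Mean Time = UTC+1
    seconds_since_midnight = bmt.hour * 3600 + bmt.minute * 60 + bmt.second
    return seconds_since_midnight / 86.4

print(f"@{internet_time():06.2f}")  # the same value everywhere on Earth
```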

I first saw the use of Internet time on an online art community. Essentially, the reasoning behind this site's use of Internet time was that it is an international online community, and thus should run on Internet time. Additionally, it takes away any disputes that could occur between art submissions, with questions of who posted what in which time zone and so on, which I guess is pretty helpful in any kind of online community where users can put up submissions. Here's the site; they keep the time up at the top -- GFXArtist

Monday, February 25, 2008

When Is Lance Strate?, or We Have All the Time in the World

Maybe you've downloaded something from iTunes, or a favorite Web site, or (dare I say it?) BitTorrent, and resented the time it took to download the desired file, song or movie. Maybe you wanted to back up your files to an external hard drive and a little dialogue box popped up to inform you of the time remaining. It's interesting to notice that it gives you a relative time duration. The last time I backed up my MP3s, my computer told me that it would take 200 minutes. After a few seconds, the number started dropping drastically till it settled on 43 minutes. I noticed that the only consistent measurement was the amount of megabytes/gigabytes being transferred. The time was relative to the speed of the connection and the upper limit of the amount of information that the connection could handle. While the events still progressed in "Real Time," and in fact it did take 43 minutes for all 32.8 GB of my music to be saved, the time calculation shifted constantly. Sometimes it assured me 30 minutes, sometimes 50. The computer didn't care whether the transfer took 30 minutes or 50; it only cared that it would be done when it was done. The time estimate was only there for my benefit.
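For what it's worth, here is a minimal sketch of my own (not how any particular operating system actually computes it) of why those estimates jump around: the remaining time is just the bytes left divided by a recent measurement of throughput, so every dip or spike in transfer speed changes the prediction, even though the copy itself proceeds in real time.

```python
# Minimal sketch of a "time remaining" estimate: bytes left / recent speed.
# The byte count is the only stable quantity; the estimate moves whenever
# the measured throughput does.

def eta_minutes(bytes_remaining: int, recent_throughput_bps: float) -> float:
    return bytes_remaining / recent_throughput_bps / 60

remaining = 30 * 1024**3                   # ~30 GB still to copy
print(eta_minutes(remaining, 12_000_000))  # ~44.7 min at 12 MB/s
print(eta_minutes(remaining, 20_000_000))  # ~26.8 min if the speed picks up
```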

It's this weird phenomenon of relative time that Lance Strate talks about in his essay on "Cybertime." The computer is ruled by technical limitations; it creates its own time. The internet only adds to that. The events still happen in "Real Time," but they're ruled by the computers that process them. A good example is sending an e-mail. I've sent e-mails to people on different e-mail clients and found that sometimes an e-mail will take far longer to reach one person than it does another. With AOL and Gmail, it's almost instantaneous. For some Hotmail users it might take a few minutes. Sometimes, the e-mails don't arrive for hours. It doesn't change the content or the meaning, but it can bring messages out of context. Imagine a series of important e-mails where the key piece doesn't arrive until after the conclusion.

Computer time doesn't work on a strict progression of cause and effect; it works on many levels of running information back and forth. Most often that information works within the normal time framework we're used to. Sometimes, it doesn't. I'm sure we've all clicked on a program on the desktop and had the application load slower than we'd like, but the computer remembers the strokes and clicks you've made and adds them as soon as the program is open.

Cybertime--

While I agree with Lance Strate that the electronic space speeds up human concepts of time, "cybertime" exists in the same world in which humans live. In other words, I wake up in the morning, 9 A.M. EST, and log into Google to check the daily news. At that very moment, it is 9 P.M. elsewhere, and some Chinese family has just finished dinner and logs in to hear their favorite internet radio broadcast or to chat with friends. Their cyber experience will always exist within the actual world. All computers have a clock in the corner of the screen, grounding cybertime in real time. After I finish this blog, as soon as I "publish" the post, that time will forever be attached to this writing.

The speed of the internet and the conception of the nanosecond are for the most part unexplored by common internet users, and for good reason. Not only are they too technical and impractical to deal with, but the speed of the internet's processes is cool yet not pertinent to accessing information. The time it takes to type the web address and the time it takes to load a webpage is the speed of the internet. Although thinking about binary code does kinda flip your mind: zero zero zero zero zero zero one zero zero zero zero zero one one zero zero zero zero one one one zero zero zero one one one one (Flight of the Conchords "The Humans Are Dead" reference; if you haven't seen it...do yourself a favor)

The synchronization brought about by the world clock and the Western idea of time has pushed industrialization forward and influenced mankind to imagine things like the printing press, telegraph, telephone, computer and the internet. The speed and magnitude of information on the internet create a flowing, streaming fountain of knowledge, automatically updated and covering the entire globe. When Lance discusses Experiencing Time: Computing as Activity and Event, he cites video games and virtual reality as ways to "lose yourself" in cybertime and spend hours on end staring at a screen. Many of my friends comment on how easy it is to spend hours looking through Facebook or "surfing" the web. Is this different from the intoxicating glow of the television screen? Or the digital IMAX that has pushed the limit of quality movie projection? I think the reason the internet is different is because the information is pertinent and customized to the user's wants, as well as the endless web of sites that would take an eternity to fully explore.

There are still concepts of past, present and future on the internet. One thing teachers have been telling me since high school is to always "know when something is written." Even more so on the internet, it is important to understand source reliability as well as the timetable of events surrounding the information. In this way, I think cybertime cannot be seen as religious in any humanly spiritual way; it turns us all into gods of our domain. Whenever we want to log on, any information we want to access is at our fingertips, creating what Strate calls "dream-self doubles...masters of cybertime". The "more integrated, metadimensional sense of self" is the future of human interaction with information, education and each other. The global network and McLuhan's "tribal village" is very real and connects the whole world instantaneously to information with unlimited potential for growth... given time... and space (the internet is running out of room, btw).

Virtual Future?

Virtual time yields virtual futures...

We can create virtual programs, and we can predict and form the future in a virtual world, but can we really determine the future of reality using these virtual programs? In Chapter 22, Lance Strate talks about predicting the future through the use of virtual programs. We can use virtual programs to simulate the future in the present, and "Given that knowledge, we can then act to control the future; and with or without forecast, the program creates a pre-determined future." Although we can see into the future, it is important to distinguish the difference between reality and virtual reality. I liked how Strate incorporated a passage about Gary Gimbert, who discussed how we can use film as a medium to create illusions from the past or the future that exist in the present. We can create films that visualize our future through a simulated environment similar to our reality. For instance, one of the best movies that explores virtual reality is 'The Matrix'. The characters live in a virtual reality because their true reality was so polluted and gruesome that they learned to live in a virtual world.

Although I agree that virtual programs can simulate the future, I was unclear about how this can form a pre-determined future. We can run a program to predict our future, yet I agree with Kurt Vonnegut that it is impossible to change or create the future. Why are we searching to break away from natural time? Natural time is an ancient tradition, dating back to the Aztecs capturing light through prisms to depict the time of day and year. I believe it is important that we stay within these boundaries and avoid trying to pre-determine our future in cyberspacetime.


The Future in Cybertime

I agree with J. David Bolter when he says that the computer is an extension. Chapter 22, written by Lance Strate, covers a great topic, because although it is so much a part of everyday life, it is easily overlooked. The concept of time and clocks is not very complicated; when someone wants to know the time, they either look at their watch or at any other digital device that is handy. The clock was bound to be invented sooner or later, simply because it's much easier than using a sundial. When clocks began to make their way into society, they were very expensive, just as early computers were. As clocks became popular, men and women would live by them. In urban life they were essential, because people would eat, work, and sleep by the clock. Today a clock's purpose has become basically obsolete, mainly because phones and computers can display the time as well as having many other functions. Cybertime introduces us to a new type of timekeeping. Cybertime is absolute time, digital time, and quicktime. The computer has contributed the processing of information at electric speed. While digital time is still being used, cybertime has become the most efficient way of taking down data, receiving information, and communicating with others worldwide. While ordinary clocks are produced with identical series of minutes, hours, and seconds, cybertime works with nanoseconds, and this marks a turning point in the way human beings relate to time. Never before has time been organized at such a precise level, at a speed beyond the realm of consciousness.

Linear Time versus Polychronic Time

In the traditional sense, we see time as a linear progression from one point to the next. It is two o'clock before it is three o'clock, an event has to begin before it can end, and so on. In Chapter 22 on Cybertime, Lance Strate refers to the work of Marshall McLuhan and his description of polychronic time. If linear time describes a straight-line progression from one point to the next, then polychronic time involves the progression of multiple lines at the same time.

The idea that the internet has opened doors and changed the way humans can experience time and space is unavoidable. Strate gives concrete examples such as the "new nanosecond culture" and "time-sharing," which are dimensions invented by the versatility of cyberspace and the internet. Strate ties cybertime to the creation of the nanosecond because it marks the existence of a segment of time that takes place beyond human consciousness. A nanosecond could pass (and does) and we as humans would have no way to consciously realize this. The idea of time-sharing is another component of McLuhan's theory of polychronic time. Instead of one person being able to access one network, or one mainframe computer, multiple people are granted access at the same time. Even a document that is not duplicated through publication can be viewed by multiple people at any given time if they are granted access to the network where the document exists.

Although Strate uses concrete evidence to formulate his argument for the evolution of time with the creation of the internet, I question the actual relevancy of the idea of "cybertime." The creation of the internet has obviously opened new dimensions to the speed at which we can experience the world (or whatever is put onto the net), but has it changed the fundamental framework of the clock and the way in which we view time? It is my belief that the clock, and the measurement of time in a 24-hour day, is a timeless invention. The clock, which tells us what time it is, when we have to leave our house, when we have to go to bed, and even wakes us up out of a sound sleep, will never evolve past its already established boundaries. No matter how fast and broad the internet becomes, it will still take us just as long to read an article as it always has, and just as long to write a page as it always has. Information is obviously more accessible, and at a faster rate, than it has ever been, but that does not mean that true time has become shorter. We still have a 24-hour day, a 60-minute hour, and a 60-second minute. The point of the internet is to broaden the availability of accessible information so that we can accomplish more and reach farther than we ever have, not to rush through life and increase the speed of time.

Virtual Reality and Community

In chapter 6, we learn that Aldous Huxley anticipated virtual reality in his classic book Brave New World. Charles U. Larson goes on to explain what Huxley meant by his "feelies." A feelie was an experience in which a person entered a booth and, after planting their hands on an electronic contact board, felt sensations identical to whatever was happening. This was like an advanced form of a movie, because now all five senses are engaged, rather than just sight and hearing. Although the idea of virtual reality seemed far-fetched when first introduced, we see it being applied more rapidly day by day. Critics of virtual reality warn of its potential to overwhelm, and wonder how many hours a day people will want to spend in virtual reality. The central concern is that the phone, computer and television use interactivity within some community, while virtual reality limits the sense of community that accompanied earlier media. As Burke describes in his theory of the Virtual Act, what happens to a person during this reality is private and intrapersonal, similar to dreams and fantasies. If these instances are only experienced by the participant, will the sense of community fade away? Virtual reality is very cool and seems great. However, everything should be done in moderation; if we were to get too carried away, video games and airplane cockpit training would be old news, and other things such as learning, sex, and eating might become virtual, and that would take away from personal relationships and community.

The Chronicles of Riddick Was a Crappy Movie Anyway

Going through Prof. Strate's bulletin board on our trusty "Interactive Rams" group, I couldn't help but read his re-telling of the "YOU" poem incident. I think Prof. Strate's strong points, coupled with the accuser's continuous poor grammar, more than reinforce the honesty of Professor Strate's claims. But moreover, I couldn't help wondering whether this guy would have accused the poem of being stolen had it not been received so publicly. By this I mean, if the poem hadn't garnered any attention at all, would this issue even have come up? This got me recalling a documentary clip that was shown to me a year ago by Professor McCourt in my Electronic Media class. The incident Professor Strate posted about demonstrates the sticky and ambiguous nature of ownership, copyright and trademark in the digital domain. With so many more venues on the internet expanding the realm of creativity and expression, the notions of originality and ownership continue to become blurred. The clip I was shown is entitled "Amen Brother Break" and deals with the complexity and significance of these issues.
The basic outline of the clip traces the far-reaching history of a drum sample known as the Amen Break. The sample originates from a 1960s song entitled "Amen Brother" by the music group The Winstons. Yet despite being created over 40 years ago, the break has permeated various musical genres of our culture in the last decade, including hip-hop, electronica, rock and R&B. The clip shows how, thanks to the advent of sampling technology, this break has been transformed in a variety of ways, impacting numerous musical genres and sub-cultures, even going as far as reaching the advertising realm. Furthermore, it was the freedom of this musical sampling and reconstruction, due primarily to how loosely sampling was policed, that allowed such a prominent period of growth around this break. I don't want to spoil the main points the clip brings up, so I won't go into it much further, but it raises powerfully interesting ideas about the importance of creative freedom in the face of copyright and the extent to which copyright should restrict or permit the use of sampling.
I think powerful connections can be made since so much of this piece addresses issues of ownership and use in the digital realm. I hope you all enjoy!



PC Internet of the Future!

I read a tasty article from the New York Times today chronicling Adobe's efforts to consolidate things like eBay, NASDAQ, and FedEx on the PC while seamlessly incorporating the internet. It is as if using a PC, let's say just the desktop, would be like using the internet, only broader. The platform you choose (XP, Vista, or some crappy Apple creation) would be able to function like a working web browser. It reminds me of widget technology, but what Adobe is trying to do is make it less link-oriented than a widget, and turn those widgets into working, internet-powered functions. The actual program, which was designed by one guy, is called AIR. Flip back and forth from desktop to web, exchanging information seamlessly. The article did say there was competition, as there almost always is, but this is still a new product. It's like having a desktop that's your homepage. Pretty cool stuff.

Here's the article: Right Here

Tuesday, February 19, 2008

Literary Quite Contrary, How Does Your Gardner Grow?

Howard Gardner is a well known and highly respected scholar in the field of education, and educational psychology. He's best known for his theory of multiple intelligences, which argues that, rather than there being one single "thing" called intelligence as is implied by the IQ test (a test that measures something that was never clearly defined), there are many different intelligences, for example verbal, mathematical, scientific, musical, visual, social and emotional, etc. Different people may be better or worse in any one of these, or any combination. Interestingly, for me, his theory was influenced, in part, by the highly uneven combinations that give rise to the autistic savant, an individual who may be on a genius level in mathematics or visual expression, for example, and extremely poor in verbal, and especially social intelligence.

So anyway, this morning as I was reading our local paper, The Record, (traditionally referred to as the Bergen Record, but now officially the North Jersey Record), I was delighted to see an op-ed piece by Gardner on literacy and the new media (the piece is not listed as a reprint from another paper, as is sometimes the case, but appears to have been commissioned by The Record, to its credit).

Gardner is certainly a media ecologist, as he puts literacy in an historical context going back to prehistory (although I find his chronology to be slightly off, as writing was developed a bit over 5,000 years ago, and other forms of notation are significantly less than 100,000 years old. But what's a few millennia among friends?). More importantly, he discusses how different communication technologies, different media in other words, give rise to different types of literacy.

And please note that, at no point does he use the term literacy as a metaphor. He's not talking about some vague notion of media literacy, visual literacy, or even computer literacy. He's talking about the ability to create and understand written texts, about knowing your ABCs and minding your Ps and Qs.

And so, without further ado, let me give you over to Howard, in this think piece that's entitled: Gardner: Reading, R.I.P.? and appears on p. L7 of today's Record (February 19, 2008):

Computers, pessimists maintain, are destroying literacy; optimists foresee the Internet ushering in a new, vibrant participatory culture of words.

WHAT WILL HAPPEN to reading and writing in our time? Could the doomsayers be right? Computers, they maintain, are destroying literacy. The signs -- students' declining reading scores, the drop in leisure reading to just minutes a week, the fact that half the adult population reads no books in a year -- are all pointing to the day when a literate American culture becomes a distant memory. By contrast, optimists foresee the Internet ushering in a new, vibrant participatory culture of words. Will they carry the day?

Maybe neither. Let me suggest a third possibility: Literacy -- or an ensemble of literacies -- will continue to thrive, but in forms and formats we can't yet envision.

That's what has always happened as writing and reading have evolved over the ages. It was less than 100,000 years ago that our human predecessors first made meaningful marks on surfaces, notating the phases of the moon or drawing animals on cave walls. Within the past 5,000 years, societies across the Near East's Fertile Crescent began to use systems of marks to record important trade exchanges as well as pivotal events in the present and the past. These marks gradually became less pictorial, and a decisive leap occurred when they began to capture certain sounds reliably: U kn red ths sntnz cuz Inglsh feechurs "graphic-phoneme correspondences."

A master of written Greek, Plato feared that written language would undermine human memory capacities (much in the same way that we now worry about similar side effects of "Googling"). But libraries made the world's knowledge available to anyone who could read. The 15th-century printing press disturbed those who wanted to protect and interpret the word of God, but the availability of Bibles in the vernacular allowed laypeople to take control of their spiritual lives and, if historians are correct, encouraged entrepreneurship in commerce and innovation in science.

Criticism and celebration

In the past 150 years, each new medium of communication -- telegraph, telephone, movies, radio, television, the digital computer, the World Wide Web -- has introduced its own peculiar mix of written, spoken and graphic languages and evoked a chaotic chorus of criticism and celebration.

But of the changes in the media landscape over the past few centuries, those featuring digital media are potentially the most far-reaching. Those of us who grew up in the 1950s, at a time when there were just a few computers in the world, could never have anticipated the ubiquity of personal computers (back then, IBM's Thomas Watson famously declared that there'd be a market for perhaps five computers in the world!). A mere half-century later, more than a billion people can communicate via e-mail, chat rooms and instant messaging; post their views on a blog; play games with millions of others worldwide; create their own works of art or theater and post them on YouTube; join political movements; and even inhabit, buy, sell and organize in a virtual reality called Second Life. No wonder the chattering classes can't agree about what this all means.

Here's my take.

Once we ensured our basic survival, humans were freed to pursue other needs and desires, including the pleasures of communicating, forming friendships, convincing others of our point of view, exercising our imagination, enjoying a measure of privacy. Initially, we pursued these needs with our senses, our hands and our individual minds. Human and mechanical technologies to help us were at a premium. It's easy to see how the emergence of written languages represented a boon. The invention of the printing press and the emergence of readily available books, magazines and newspapers allowed untold millions to extend their circle, expand their minds and expound their pet ideas.

Inhabiting fascinating worlds

For those of us of a 19th- or 20th-century frame of mind, books play a special, perhaps even spiritual, role. Works of fiction -- the writings of Jane Austen, Leo Tolstoy, Toni Morrison, William Faulkner -- allow us to inhabit fascinating worlds we couldn't have envisioned. Works of scholarship -- the economic analyses of Karl Marx and John Maynard Keynes, the histories of Thucydides and Edward Gibbon -- provide frameworks for making sense of the past and the present.

But now, at the start of the 21st century, there's a dizzying set of literacies available -- written languages, graphic displays and notations. And there's an even broader array of media -- analog, digital, electronic, hand-held, tangible and virtual -- from which to pick and choose. There will inevitably be a sorting-out process. Few media are likely to disappear completely; rather, the idiosyncratic genius and peculiar limitations of each medium will become increasingly clear. Fewer people will write notes or letters by hand, but the elegant handwritten note to mark a special occasion will endure.

I don't worry for a nanosecond that reading and writing will disappear. Even in the new digital media, it's essential to be able to read and write fluently and, if you want to capture people's attention, to write well. Of course, what it means to "write well" changes: Virginia Woolf didn't write the same way that Jane Austen did, and Arianna Huffington's blog won't be confused with Walter Lippmann's columns. But the imaginative spheres and real-world needs that all those written words address remain.

I also question the predicted disappearance of the material book. When they wanted to influence opinions, both the computer giant Bill Gates and the media visionary Nicholas Negroponte wrote books (the latter in spite of his assertion that the material book was becoming anachronistic). The convenience and portability of the book aren't easily replaced, though under certain circumstances -- a month-long business trip, say -- the advantages of Amazon's hand-held electronic Kindle reading device trump a suitcase full of dog-eared paperbacks.

Books in jeopardy

Two aspects of the traditional book may be in jeopardy, however. One is the author's capacity to lay out a complex argument, which requires the reader to study and reread, following a circuitous course of reasoning. The Web's speedy browsing may make it difficult for digital natives to master Kant's "Critique of Pure Reason" (not that it was ever easy).

The other is the book's special genius for allowing readers to enter a private world for hours or even days at a time. Many of us enjoyed long summer days or solitary train rides when we first discovered an author who spoke directly to us. Nowadays, as clinical psychologist Sherry Turkle has pointed out, young people seem to have a compulsion to stay in touch with one another all the time; periods of lonely silence or privacy seem toxic. If this lust for 24/7 online networking continues, one of the dividends of book reading may fade away. The wealth of different literacies and the ease of moving among them -- on an iPhone, for example -- may undermine the once-hallowed status of books.

But whatever our digital future brings, we need to overcome the perils of dualistic thinking, the notion that what lies ahead is either a utopia or a dystopia. If we're going to make sense of what's happening with literacy in our culture, we need to be able to triangulate: to bear in mind our needs and desires, the media as they once were and currently are, and the media as they're continually transforming.

It's not easy to do. But maybe there's a technology, just waiting to be invented, that will help us acquire this invaluable cognitive power.

Howard Gardner teaches cognitive psychology at the Harvard Graduate School of Education. He is directing a study of the ethical dimensions of the new digital media.


Now, far be it from me to argue with the esteemed Professor Gardner, but I must conclude by saying that I do not share his optimism about the future of literacy.

While I believe that a significant minority will retain reading and writing skills, I think that minority will be a combination of an affluent elite, for whom literacy will be a luxury item, and certain vocational groups, for whom literacy will be a requirement, say computer programmers. But these vocational literates (Eric Havelock used the term craft literacy) may only use reading and writing for utilitarian purposes, not to obtain culture, entertainment or enlightenment.

The same short attention span that Gardner writes about in his piece will, in my opinion, drive people away from the written word altogether, and toward the visual. And while images can never entirely replace words, speech recognition and speech synthesis software will go a long way towards making literacy unnecessary for increasingly larger numbers of people.

Anyway, that's just my op-ed rebuttal, for what it's worth (about a buck forty-nine, I figure). Believe me, nothing would make me happier than to learn in no uncertain terms that I'm wrong and Howard's right.

Microsoft...Television?

In chapter five, Charles U. Larson comments on the impact of television on the implications and predictions of virtual reality. Aside from the virtual aspects of it, the influence that television can have on our individual and social needs in the future is something that Larson believes is worth stating. Although I'm not sure that I agree that television is a personal and social necessity, I do think that the television, as compared to the computer, is a more accessible means to enter a true virtual reality. The FCC has recently mandated that all TV transmission be digital rather than analog. Analog transmissions go over the air and take up more spectrum than digital cable or satellite transmissions. Eventually this will lead us to a point where HDTV is the standard for watching all television channels.

In my own personal opinion, HDTV is the most stimulating type of virtual reality. The sound and picture quality of HD is the most realistic way that I can explore a foreign country or feel like I am actually at a sporting event. So my next question is: what if we could experience the virtual reality of HD not only as television entertainment, but also in a social network? I think Microsoft MediaRoom has started to try to answer this question.


MediaRoom is Microsoft's latest IPTV technology. It is a comprehensive television, picture, music, gaming, and social networking platform. Its television component represents the most advanced of current TV technology, including picture-in-picture, which allows multiple screens to be watched simultaneously. Many TVs have this feature, but for certain shows MediaRoom lets you watch the same program from multiple angles. For example, NASCAR fans can watch the entire race and the in-car cameras of four or five of their favorite drivers simultaneously. MediaRoom is also highly compatible with Xbox 360 software, and Microsoft ultimately plans to incorporate the Xbox into the MediaRoom system, using the Xbox headset and controller to go shopping in virtual reality. This will allow MediaRoom users to talk to a sales representative who will guide virtual shoppers and make recommendations by bringing up different products on their screen. This would truly create a virtual shopping experience, more so than online shopping can. Far and away the most virtually realistic part of MediaRoom is the social networking. Microsoft takes the high-definition environment it has created, which allows you to store your pictures and music, and lets you share and exchange these with other people. Looking forward, MediaRoom also contains the Microsoft Mediaroom Application Development Toolkit, with which you will be able to create your own programs for the system, running on Microsoft's media platform. This toolkit will aid in the advancement of MediaRoom's technology by opening the suggestion box to anyone creative enough to try it.

References:

Microsoft's MediaRoom Tests the Grounds Beyond IPTV's 'Walled Garden'


Wednesday, February 13, 2008

Focus the Nation


Focus the Nation is a relatively new website which stresses the need for college students to get involved with preserving our earth. I was checking out articles on the N.Y. Times website when I found this article called "Changing the Climate on Campus." The article discussed how Focus the Nation is trying to make a change among college students all across the country. They even wrote about a teach-in at Fordham University, an organized daylong series of lectures on the environment, ranging from the restoration of the polluted Bronx River to the ins and outs of international climate treaties. At Fordham, 19-year-old Thomas Zellers, who helped organize the Focus the Nation teach-in, said, "Attendance at the teach-in there was a bit light." Zellers noted that drafting college students into a political movement on global warming — or almost any issue — can be an uphill battle. "Still, I think this is inspiring people," he says. "Everyone has a stake in this. Above all else I think this will be the defining issue for us."

Do not be afraid to get involved! Be Conscious! Check these sites out!

http://www.time.com/time/health/article/0,8599,1711450,00.html
www.focusthenation.org/greendemocracy_whatsnext.php

Tuesday, February 12, 2008

Progress from the Computer Chair: A Look at Larson's Piece

Charles Larson's piece, Dramatism and Virtual Reality: Implications and Predictions, focuses on intrapersonal communication. As discussed in earlier blogs about Larson's chapter, he states the usual concerns and criticisms over Virtual Reality and the threat it poses to genuine human interaction: "virtual reality almost by definition limits the sense of community that accompanied earlier media" [115].
It has been shown time and time again that our fears about increasingly immersing ourselves in digitized worlds of communication are grossly exaggerated. One does not need to look hard to find glaring contradictions to these fears in examples such as Second Life or Facebook. One claim made by Larson is, however, interesting, or at least worth thinking about again. Looking beyond the dramatized fears of people losing all touch with those around them and their physical surroundings, there remains the point that even if we do not become engulfed in solitude and anti-social behavior, we might still become really lazy.
"Virtual Reality also raises serious questions regarding our social order such as concern over the degree to which humans might vegetate their lives way playing in a virtual amusement park, never engaging in productive efforts" [115].
While this claim by Larson is probably shaped in extremes, it does bring up the point that, while we're not completely useless, are we limiting our productivity to a noticeable degree? I can't help but recall countless nights of procrastination and stress facilitated at the hands of Facebook and YouTube. The ability to connect with others or help society is not being questioned here; I'm just making the point that Larson is right in the sense that a good amount of productivity is being sacrificed at the hands of mindless web surfing. Bringing this back to Virtual Reality, I can only harp on the infamous South Park World of Warcraft episode: mocking as it may have been, the observation was there; the characters, in their quest for "progress," physically grew fat, slobbish (to the point of incontinence) and idle.
There ARE a lot of benefits to virtual reality, I'm not denying it, but beyond the utilitarian and humanitarian examples you can't ignore that a lot of this industry is developed for recreation, and part of what fosters it is immersing yourself in it for hours at a time. The fear isn't that we'll stop talking to the people around us or stop living in the real world; it's that we'll stop doing anything with our lives.

Calling all Philosophers (2.0)

Bolter gives an analysis of the conflict between words and images and reviews philosophical theories of the self as they relate to virtual reality. Any attempt to relate the virtual self to earlier philosophers like Descartes or Plato (in Zettl's piece) will fall extraordinarily short in explaining this new dimension of living.

The people in Plato's cave grew up believing in the shadows projected before them; to those who had only ever seen shadows, the shadows were reality. But we now live in a world where the shadows are becoming more and more real, yet unlike Plato's cave dwellers, we know they are fake. Descartes's ideas about the self and the relationship between body and mind do not concern the 21st-century person in the same way. We are capable of creating a digital body as a reflection of the mind as well as of the body.

It's a new world, and we need new philosophers to interpret the implications that the digital world will have on our generation and those that follow. Not even the most pensive academics of the Renaissance or the Age of Reason could have interpreted these new technologies' impact, so the new academia must stop trying to tie ancient philosophy to the digital universe. Ask the youth of the nation what implications these new technologies have on their lives, because it is their lives that have been most deeply impacted by the new digital world.

Push The Wii To The Limit: VR Head Tracking

The history of Virtual Reality arguably begins with Aldous Huxley, who imagined the "feelie" experience. These "feelies" were fictional, of course, but the idea was to incorporate all of the senses: one could feel sensations of touch, taste, smell, and even sexual experience (not a sense, but pretty cool). From this original model, virtual reality did not immediately take off in the "feelie" manner. According to Charles U. Larson in Dramatism and Virtual Reality, the idea of "feeling" a virtual world and the idea of cyberspace were immediately seized upon not just by the video game industry, but by the Pentagon and the pornography industry as well. Clearly, all three could have infinite fun with a world where people can escape their limited, physical bodies and then pursue their inner goals and desires. Multi-million-dollar projects in places like Disney World captured similar interactions between human beings and the VR worlds around them. It was noted during the infant stages of VR that the potential of this medium would not be realized until years down the road, much like what occurred with the breakthrough of the television medium.

Well, here we are, and now is the time when technology can provide the uses of virtual reality that Larson predicted. Ideas like "virtual surgery," "virtual driving lessons," "virtual clothing," "virtual vacations," and "virtual sex" all seem possible and applicable to today's society for a myriad of purposes. One necessity, before people can begin to indulge in the experiences above, is an easy portal into the virtual world, one that doesn't constrict or nauseate the person attempting to cross back and forth between dimensions. This is very necessary if we are to finally "bridge" the gaming world most play in now with an unbounded, virtual world. What we need is to turn our computer screen into a window, rather than turning our eyes, our "windows to the world," into computer monitors themselves.

Larson describes this difference in virtual thinking as the VR participant not being on the outside looking in, but rather on the inside looking around. The more we look around, the more we see, making the VR world both infinite and incomplete. The PlayStation 3 and Xbox 360 are right now becoming the most popular consoles in the gaming world, but it seems the Nintendo Wii will be the first system that can truly take a gamer to a new scene. Watch this video and see how, if the Wii is taken to the next level, you may find yourself smack dab in the world of virtual reality and the future of gaming.
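For anyone curious about the mechanics behind that demo, here is a rough, hypothetical sketch of how Wii-style head tracking can work: two infrared LEDs (the sensor bar) are worn near the viewer's eyes, the Wiimote's IR camera watches them, and the apparent position and separation of the two dots are used to estimate where the head is, so the on-screen perspective can shift accordingly. The camera resolution, field of view, and LED spacing below are assumed round numbers for illustration, not official specifications.

```python
import math

# Assumed Wiimote IR camera parameters (approximate, for illustration only)
CAM_WIDTH_PX = 1024              # horizontal resolution of the IR camera
CAM_HEIGHT_PX = 768              # vertical resolution
CAM_FOV_RAD = math.radians(45)   # assumed horizontal field of view
RAD_PER_PX = CAM_FOV_RAD / CAM_WIDTH_PX

LED_SEPARATION_MM = 150.0        # assumed distance between the two LEDs on the glasses


def estimate_head_position(dot1, dot2):
    """Estimate the viewer's head position (x, y, depth in mm) relative to the
    camera, given the pixel coordinates of the two infrared dots."""
    (x1, y1), (x2, y2) = dot1, dot2

    # Apparent separation of the dots in pixels -> angular separation -> depth.
    sep_px = math.hypot(x2 - x1, y2 - y1)
    sep_angle = sep_px * RAD_PER_PX
    distance_mm = (LED_SEPARATION_MM / 2.0) / math.tan(sep_angle / 2.0)

    # Midpoint of the dots -> angular offset from the camera's optical axis.
    mid_x = (x1 + x2) / 2.0
    mid_y = (y1 + y2) / 2.0
    angle_x = (mid_x - CAM_WIDTH_PX / 2.0) * RAD_PER_PX
    angle_y = (mid_y - CAM_HEIGHT_PX / 2.0) * RAD_PER_PX

    head_x = distance_mm * math.sin(angle_x)
    head_y = distance_mm * math.sin(angle_y)
    return head_x, head_y, distance_mm


# Example: two dots seen near the center of the frame, fairly close together
# (made-up pixel coordinates), suggesting a viewer a few meters from the screen.
print(estimate_head_position((480, 380), (544, 380)))
```

A renderer would then feed the estimated head position into an off-axis projection, which is what makes the screen behave like a window you look through rather than a surface you look at.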

Zettl's Take on Virtual Reality

In Zettl's article on the advent of virtual reality, the technology is analyzed in terms of its artistic parallel to traditional painting and its capabilities when measured against media like television and the motion picture. Analyzing virtual reality in terms of media aesthetics, specifically the realm of three-dimensional technique, Zettl asserts that there is relatively little difference between past works of Renaissance art and the digital creations generated through computer graphics. Zettl claims that many artistic approaches, such as perspective, are used in both formats, with the only true difference coming from the computerized precision that the artistic eye can perceive. Additionally, virtual reality expands on the form of television, which captures the reality of motion and sound, by allowing perspective and point of view to be controlled and altered in a manner more accurate and befitting of reality. While it enhances real life, however, Zettl maintains that the power of virtual reality lies not in its ability to duplicate reality but to go beyond its scope and characteristics. This statement is reinforced by Zettl's claim about HDTV, whose standard is inhibited by its tendency to depend on the motion picture.
Zettl's piece is interesting because, aside from the usual remarks made about a coming technology, such as concerns, uses, and the ethics of it all, interesting and new points are made. Zettl refreshingly sees the ability of virtual reality not merely to enhance or duplicate the characteristics of reality seen in photography and television but to go beyond them. Perspective, distance, and angle are merely a few of the characteristics that VR expands upon in such a way that they are no longer reminiscent of their counterparts in real life. Zettl's originality is reinforced by the way he discusses the past with neither superiority nor disdain; his points are both thoughtful and objective, making them even more insightful.

Interpersonal Relationships in VR

In Dramatism and Virtual Reality: Implications and Predictions, Charles U. Larson voices the same concerns as other critics of virtual reality. By emphasizing the intrapersonal nature of virtual reality simulations, Larson neglects some of the most important aspects of a virtual reality network. The greatest ability of virtual reality is connecting people in a deeper manner than message boards and emails. Through the extension of a virtual body, people are more willing and more able to make emotional connections with others.

While there are more impressive simulations available, the most popular virtual reality environment is World of Warcraft. I don't think I'm stretching the definition of the term by calling WoW a virtual reality environment. So why is World of Warcraft so popular? Because it's fun, and it allows people to make real connections without needing to be in the same time zone or zip code. And WoW does this without the need for headset glasses or touch sensors placed all over your body. Nor does World of Warcraft display ultra-realistic graphics or physics resembling the real world. Yet, because of the real players behind the avatars, the game presents a realistic, humanistic map for one to explore.

My point is that virtual reality won't appear the way it does in The Matrix. We won't "jack in" one day to some virtual environment. Rather, through slow and deliberate modifications to existing technologies, the world will become more digital and more virtual, but it will do so invisibly to most people. We will continue to augment our reality with virtual environments. And the most important use of virtual reality will be in presenting an environment to interact in and connect through without the need for physical or temporal locality.

Just a thought

In an article I recently read about realism in video games, the author brought up the point that a simulation refers not only to how an object in a virtual world is represented, but to how it behaves as well. He gave the example of a car engine not just looking like a car engine in a virtual world, but also behaving like an engine. Herbert Zettl, in “Back to Plato’s Cave: Virtual Reality,” states that the goal of computer graphics is “not to simulate the real environment, but the lens-generated one” (107). So I guess, in a sense, it simulates the behavior of a wide-angle lens or a telephoto lens, as he explains, but what about things that go beyond physical appearance? How would virtual reality be viewed if someone threw an object and the virtual environment followed the laws of physics? Many video games already incorporate physics into their game engines to make them more realistic. What I’m wondering is how realistic virtual reality has to be before someone could get so caught up in it that they are unable to distinguish the real world from the virtual one. Would it be worth following the laws of physics, or any natural laws, in a virtual reality world? And how much importance should be put into creating a world that simulates more than just the aesthetic aspects? Though I suppose it is more important to put an emphasis on the ethical issues that could arise in the “perfect existential world” (108), since it leaves so much freedom and room for experimentation. Plenty of computer games whose graphics aren’t nearly as realistic as today's have already been presenting audiences with moral and ethical decisions. I guess it’s only natural for people to worry about this, since technology is making virtual reality more realistic every day.
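To make the "behaves like the real thing" idea concrete, here is a minimal, hypothetical sketch of the sort of physics step a game loop might run for a thrown object, with gravity as the only force; the frame rate and throw velocity are made-up example values, not taken from any particular engine.

```python
# Minimal sketch of a thrown object obeying gravity in a game loop,
# using simple Euler integration. All numbers are illustrative.

GRAVITY = -9.81       # m/s^2, acting on the vertical (y) axis
TIME_STEP = 1.0 / 60  # seconds per frame, assuming a 60 fps game loop


def step(position, velocity):
    """Advance the object's state by one frame."""
    x, y = position
    vx, vy = velocity
    vy += GRAVITY * TIME_STEP   # gravity changes the vertical velocity
    x += vx * TIME_STEP         # position changes according to velocity
    y += vy * TIME_STEP
    return (x, y), (vx, vy)


# Throw an object from the origin at 10 m/s horizontally and 5 m/s upward,
# and simulate until it comes back down to the ground (y <= 0).
pos, vel = (0.0, 0.0), (10.0, 5.0)
frames = 0
while True:
    pos, vel = step(pos, vel)
    frames += 1
    if pos[1] <= 0.0:
        break

print(f"Landed after {frames} frames, about {pos[0]:.1f} m downrange.")
```

A full game engine layers collision detection, friction, and so on onto the same idea; the point is simply that the simulation governs how objects behave, not just how they look.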

I love to Blog and the N.Y. Giants

Recently, I have found that the blog has become one of the newest influential mediums on the internet, because it provides a real, personable approach and can help you get a better understanding of the writer. I like how anyone can write subjectively on any given subject. Unlike a magazine or newspaper article, in which the authors most often write in a similar style and never address you personally (except maybe the N.Y. Times), the blog shows the true personality of the person writing. I love how you can post pictures, videos, and articles without the permission of the rightful owner. I believe the freedom of the blog has helped create another form of communication that will continue to prosper over the years.

It provides the ability to reach a widespread, global audience with your writing style, whether you choose to be formal or informal. Obviously, political blogs have to maintain a degree of professionalism, but they still help you get a better understanding of the writers' true opinions and beliefs. On the other hand, blogs from the general public have become increasingly popular because you get to write how you feel. It is completely subjective, and as our teacher noted, it is even considered offensive to comment on someone's grammatical errors. I love to blog. I do not have to write with rhetorical persuasion; I am just going to throw my thoughts out there and let them grow on you.

P.S. This is for all of you who doubted the N.Y. Giants. Sorry I missed class last week, but for a true Giants fan this was one holiday that comes around about as often as a swarm of locusts. Also, Steve Spagnuolo signed back with the Giants for another three years as defensive coordinator, because he said there was no better city in the world to come home to for a parade! Nonetheless, I think the ‘Redskins suck’ chants, which I screamed at the top of my lungs, might have helped. Sorry, Jimmy Page, but defense wins championships and ruins perfect seasons too.

Are We Getting Ahead of Ourselves?



Here a soldier demonstrates the Virtual Reality (VR) parachute trainer while an Aviation Survival Equipmentman controls the program from a computer console. Students wear the VR glasses while suspended in a parachute harness, and then learn to control their movements through a series of computer-simulated scenarios. The computer receives signals from the students as they pull on the risers that control the parachute. The VR trainer also teaches aircrew personnel how to handle a parachute in different weather conditions and during possible equipment malfunctions. Navy and Marine Corps aviators receive state-of-the-art training at the Naval Survival Training Institute. U.S. Navy photo by Chief Photographer’s Mate Chris Desmond. This is why they do it this way: it saves lives, money, fuel for the jets, etc.


I found it interesting that Herbert Zettl, in chapter five, referenced a quote from Socrates and even named his chapter 'Back to Plato's Cave.' He writes about the philosophical and ethical implications of virtual reality for human action, relating it to a passage from the "Republic". As we continue to expand our technology, it is hard to recognize when we will make a mistake or have pushed the limit too far. Although I support the virtual reality program for teaching soldiers how to fly, maybe we could direct this energy and expense toward creating a more positive outcome than training soldiers for war. One of the main points in the analogy between Plato and the reading was whether it is better to stay in the cave, remaining in a world of shadows, or to expand the boundaries. I support stretching the boundaries, but it gets complicated in situations where VR has been used to further someone's evil actions. For example, there is the story of the 9/11 hijackers, who learned to fly planes with the help of flight-simulator programs. Although we cannot prevent people from making bad decisions, the question remains: when do you limit the expansion of technology?

Grand Theft Auto in Plato's Cave

Herbert Zettl's and Charles Larson's chapters in Communication and Cyberspace focus on virtual reality and how it can affect human understanding and even human behavior. Zettl states, "...virtual reality provides a perfect existential world, in which we can exercise free will and make any number of decisions, however extreme, without ... the underlying anxiety of accountability" [Pg. 108]. While things like Second Life or simming have not moved beyond very basic levels, Zettl's vision of "...operating in an amoral environment...whose virtual character liberates us from feeling any existential angst when making choice" [108] can be seen in video games like Grand Theft Auto, Bully, or World of Warcraft. In these games, players are able to choose missions that run the gamut from morally ambiguous to downright evil and dirty. Many a researcher has tried to argue that these immersive games with questionable moral ground have an effect on children and violence, though most research into the subject has not produced a strong argument either way. In fact, one could ask whether the game itself invites players to act out in whatever way they choose.

Larson points out that "In objective reality, the agent must choose to act and follow or not to follow the path implicit in the scene. In virtual reality, the interactent must decide to look left, look right..." [119, sic]. In a virtual world you decide on a path and take it, and that frees you from a moral sense of guilt or angst over, say, taking out a mob hit man or blowing up a car in a video game. The player is not choosing the path for themselves; they are just deciding what makes the game more interesting. To Larson, VR is influenced by the setting: a mob game will make you think like a mafia don and act accordingly, a game about solving puzzles will make you think analytically, and so on.

Larson and Zettl illustrate how virtual reality is a great tool for training, and even gameplay, but there are still moral issues that need to be settled. If cyberspace remains an amoral place of escapism, how will that, in turn, change us?

Tuesday, February 5, 2008

Physical Cyberspace

I discuss the concept of physical cyberspace, as opposed to perceptual and conceptual cyberspace, in the Introduction to Communication and Cyberspace: Social Interaction in an Electronic Environment, edited by Lance Strate, Ron L. Jacobson, and Stephanie Gibson, and the following article from Technology Review discusses a dramatic example of this material foundation of our shared sense of space.

http://www.technologyreview.com/Infotech/20152/?nlid=854



Tuesday, February 05, 2008
Analyzing the Internet Collapse
Multiple fiber cuts to undersea cables show the fragility of the Internet at its choke points.
By John Borland

When the Internet suddenly collapsed early last Wednesday across the Middle East and into India, it provided a stark reminder of how the Net's virtual spaces can still be held hostage to real-world events.

Almost simultaneously, two separate undersea fiber-optic cables connecting Europe with Egypt, and eventually with the Middle East and India, were cut. The precise cause remains unknown: experts initially said that ships' anchors, dragged by stormy weather across the sea floor, were the most likely culprit, but Egyptian authorities have said that no ships were in the region.

Whatever the cause, the effects were immediate. According to its telecommunications ministry, Egypt initially lost 70 percent of its connection to the outside Internet and 30 percent of service to its call-center industry, which depended less on the lines. Between 50 and 60 percent of India's Net outbound connectivity was similarly lost on the westbound route critical to the nation's burgeoning outsourcing industry.

"This [fiber path across the Mediterranean] is a choke point, which until recently was a very lightly trafficked route where there wasn't great need for cable," says Tim Strong, an analyst at telecommunications research firm Telegeography Research. "There are many new cables planned for the region, but as it happens, they're not in service yet."

Undersea cable damage is hardly rare--indeed, more than 50 repair operations were mounted in the Atlantic alone last year, according to marine cable repair company Global Marine Systems. But last week's breaks came at one of the world's bottlenecks, where Net traffic for whole regions is funneled along a single route.

This kind of damage is rarely such a deep concern in the United States and Europe. The Atlantic and Pacific Oceans are crisscrossed so completely with fast fiber networks that a break in one area typically has no significant effect. Net traffic simply uses one of many possible alternate destinations to reach its goal.

Not so with the route connecting Europe to Egypt, and from there to the Middle East. Today, just three major data cables stretch from Italy to Egypt and run down the Suez Canal, and from there to much of the Middle East. (A separate line connects Italy with Israel.) A serious cut here is immediately obvious across the region, and a double cut can be crippling.

The two damaged cables, both cut about five miles north of Alexandria, Egypt, are the most modern of the trio. One, owned by the U.K.-based Flag Telecom, a subsidiary of the India-based Reliance Group, stretches nearly 17,000 miles from Europe to China. The second cable, known as Sea-Me-We 4 and owned by a consortium of 15 different telecommunications companies, stretches from Spain to Singapore. Together, they have a capacity of close to 620 gigabits per second, according to Telegeography Research.

The one remaining cable traversing roughly this route is the older Sea-Me-We 3 cable, which has a capacity of 70 gigabits per second--considerably less than its newer rivals.

A third regional cable, also owned by Flag Telecom, was cut the morning of February 1 off the coast of Dubai, in an apparently unrelated event. This break has caused less trouble, since it is part of a Middle East loop that offers alternative routes for data traffic.



A map of the fiber-optic cables crossing the Mediterranean, connecting Europe with Egypt, the Middle East, and ultimately India. The Flag Telecom Europe-Asia and Sea-Me-We 4 lines were cut last week just north of Alexandria, Egypt.
Credit: Telegeography Research

The cause of the cuts in the two main broken cables remains somewhat mysterious. A spokesman for Flag Telecom said on Monday that the company would not speculate on the causes until the broken line has been examined. However, Egyptian telecommunications officials said on Sunday that no ships had crossed the site of the breaks in the 12 hours before or after the incidents on Wednesday. The site is also a "restricted area," further lessening the chances of a ship's responsibility, the ministry said.

The unexpected collapse in service forced Internet providers across the region to scramble for alternative connections, most using backup bandwidth sources under contract for just such an emergency. Many ISPs began switching traffic east instead of west. Data from India to Europe might thus first pass through East Asia, across the Pacific, through the United States, and across the Atlantic Ocean before reaching its destination. While slowing traffic, in some cases significantly, this at least allowed data to get through.

According to ISP Association of India secretary R. S. Perhar, service providers in his country adapted to the cuts relatively quickly. Traffic from business customers was given a top priority on networks, with consumer traffic taking second place. Three of the country's largest service providers weren't affected at all, since they weren't buying bandwidth from the Flag or Sea-Me-We 4 cables, he says.

Many other Indian companies had diversified their network connections following December 2006, when an earthquake off the coast of Taiwan severed seven major undersea cables that served India as well as East Asia. But some providers who had not acted as quickly found themselves cut off entirely, Perhar says.

"Most have done good network planning and made sure they get bandwidth from several service providers," he says. "But there are people who did not have redundancy in their networks."

Outsourcing companies also found themselves facing potential disruption. With so much outsourced work now being performed in India or elsewhere in the region, companies in the United States and Europe are increasingly dependent on these broken lines for their everyday business. But like the ISPs, the biggest outsourcing companies said that they relied on redundant connections to ensure the flow of data.

"We have planned for circumstances like these," says Nathan Linkon, a spokesman for Infosys, a large Bangalore-based outsourcing company. "We have diversity in path and providers, and we haven't lost any connectivity to our offices or customers."

With just two cables at issue, restoring service is expected to go more smoothly than did the 49-day process required after the Taiwan earthquake. Flag Telecom has told its customers that a repair ship that launched from Catania, Italy, will arrive and begin work today. The company said that Egyptian authorities are "expediting the permits" so that work can begin as soon as the ship arrives.

These repair operations have become fairly routine, with marine service companies on call around the world to launch a ship as quickly as possible when a nearby cable has been torn by a ship's anchor or fishing net, or, more unusually, by a natural event such as an earthquake.

A repair ship will typically take several days to reach the site of a break, says Stephen Scott, commercial manager for the U.K.-based Global Marine Systems, which is not involved in fixing this week's break.

A ship will locate the break in the line, sometimes by using a remote-controlled submarine device that can send signals up and down the cable, Scott says. The cable is then cut entirely at the break, and the little sub brings one half to the surface. Alternately, some operations simply use long grappling hooks to grab the cable.

Once the first half is brought to the surface, the crew splices on a long segment of replacement cable. The first half is let back to the sea floor; the other broken half is brought to the top, and the other end of the replacement cable is spliced on.

Unless the seas are rough, this double-splicing operation can take about 20 hours from start to finish, Scott says.

In the wake of the fiber breaks, Perhar says that his organization is encouraging ISPs and companies dependent on fast connections to continue diversifying their bandwidth sources as much as possible, and to lobby for new cable to be laid.

Telegeography Research counts at least four new fiber lines planned for the Europe-Egypt route over the next few years, including another by Flag Telecom, one by Telecom Egypt, another by the Egypt-based Orascom Telecom, and a fourth funded by the India-Middle East-Western Europe consortium of companies.

But even these will all use roughly the same route, says analyst Strong. That will keep this Mediterranean zone a "choke point" worth watching.

"With more cables, it's getting better over time," Strong says. "But there will still be a lack of physical, geographical redundancy. That is something of a concern."

I would just add to this that satellites are also part of physical cyberspace, and knocking them out of orbit would also have an effect on the internet.

Really? Online Web Ads are like Art?

Paglia brings up an interesting point of discussion in the chapter on online publication. Writing on the Internet, whether for a newspaper, a blog, or an online magazine, is often undermined, if not underdeveloped. However, the point of view Paglia brings to these issues is a little questionable to me. Since the whole ads-as-art thing has been brought up numerous times on the blog, I'll leave that one alone, but I would point out that it's a little far-fetched and not in the same league as a Warhol lithograph.
First, I would start off in agreement with Paglia about the scope of opinion and refreshing dialogue online writing can create in a world that is hypersensitive to the daring articles writers sometimes attempt in the traditional world. One would be hard pressed to argue against what the marketplace of ideas promoted by business has done to the Internet, especially considering its humble beginnings. Furthermore, the scope is best measured by its impact on our youth (look at this class), and Paglia's point about the futility of newspapers in our world cannot be denied. Newspapers are struggling, as evidenced by the bombardment of free papers we encounter at every subway stop in the city. If the Internet can strengthen the limping newspaper industry, then kudos.
That being said, I do disagree with some points. Many of the pros Paglia cites for online publication, such as its ability to be updated instantaneously and its more visual, abridged format, are only benefits depending on whom you ask. In response to the criticism of those "verbose," superfluously written pieces: they are not written in the spirit of college papers padded with filler. It's not as if we're paying writers by the word, like Charles Dickens; what I'm saying is that there is value to these types of pieces, and people read them because they enjoy this kind of reading and discourse. The simplicity online publication offers isn't bad, but neither is the alternative, wordier article. Moreover, the free-flowing format of online writing seems more a personal preference of Paglia, who harps on the stream-of-consciousness-like writing of Ginsberg and Kerouac; if that's the case, it is not a benefit of online publication but a personal preference.
Finally, I'd like to reinforce Paglia's own description of the Internet as "an ever-expanding if still ill sorted and error filled encyclopedia...". An abundance of links and sources does not necessarily mean something good; too much choice can be just as bewildering and ineffective as not having any choice at all.

The convenience of hypertext

In Chapter 16, Camille Paglia discusses a very pertinent subject: print versus hypertext. Hypertext allows the reader to quickly find an area of interest with just the click of a mouse. Although books and articles written in print explain material just as well or better, the convenience of hyperlinks has made learning via the Internet superior. The ultimate example of this is wikipedia.org. I love Wikipedia, and I find that once I have pressed search it is hard to exit the site, because you can type in almost anyone or anything, and something else, maybe even a vocabulary word, will spark your interest. However, there are some advantages to actual printed work. Sites such as Wikipedia (and other user-edited resources) can be edited by whoever feels like it. Although I have noticed that false information is rapidly deleted, the fact that printed work is written by only one individual, and cannot be changed once it is published, makes it seem more reputable and trustworthy in most people's eyes. I feel it's a good thing that people will go out of their way to contribute to a site that others will read. The Internet is great in the sense that everyone can participate and add something positive. Hypertext was bound to happen sooner or later. People today are becoming less and less patient with all the technology coming out in recent years. People want instant gratification, and hypertext allows that to happen while trying to find certain information. It's a whole lot easier than flipping through an index or searching through a five-inch-thick dictionary; why not just go to dictionary.com?

New Privacy Concerns About Facebook

From the Chronicle of Higher Education:

http://chronicle.com/free/2008/02/1489n.htm


Monday, February 4, 2008


Study Raises New Privacy Concerns About Facebook

By JEFFREY R. YOUNG

Undergraduate researchers at the University of Virginia say that Facebook's application platform, which allows anyone to create plug-ins that can be placed on personal pages of the popular social-networking service, sends far more personal information than is necessary to the plug-ins' developers.

That means that an identity thief could develop an application to grab personal information using Facebook, says the study's leader, Adrienne P. Felt, a senior majoring in computer science.

Facebook officials argue that their application platform needs to be liberal with users' information to function properly. And they insist that any application developer who creates a malicious plug-in would be denied access to the site because misusing data violates Facebook's terms of service.

Thousands of applications have been created for Facebook since the company began allowing them last May. A typical application lets a user who adds the plug-in to their page share some information about themselves with other users who have also installed the application. One application called Visual Bookshelf, for instance, lets users list books they have read and share their lists with friends.

Even some colleges have joined in, creating plug-ins that, for instance, stream headlines from the public-relations office to users' Facebook pages or allow users to search the library's card catalog via Facebook. A college marketing blog recently listed more than a dozen Facebook applications created by colleges.

To install an application to their profile, users must check a box that says: "Allow this application to know who I am and access my information." The site further warns: "If you are not willing to grant access to your information, do not add this application."

But Ms. Felt argues that many Facebook applications do not even need access to most of a user's personal data to perform their functions (an application that lets users search a college library's catalog, for instance, does not need to know a user's birthday or who their friends are), and she is urging Facebook and other social-networking sites to fine-tune their settings to better guard user privacy.

In her study, Ms. Felt examined the 150 most popular third-party Facebook plug-ins to see whether they made use of private information on the users' accounts.

"We found that 8.7 percent didn't need any information; 82 percent used public data (name, network, list of friends); and only 9.3 percent needed private information (e.g., birthday)," Ms. Felt wrote on a Web site about the research.

She said in an interview that she did not know of any Facebook application developers who had misused private information, but she argued that "if this hasn't happened already, it will."

"I would recommend that people think twice before installing some random application," she added.


Protection From Plug-Ins



Facebook officials defended the company's policies.

"By limiting developers' access to user data, Facebook would be limiting the types of useful applications that can be built," said a representative of Facebook, who spoke on condition of anonymity because she is not authorized to talk to reporters.

The representative, in an e-mail interview, pointed out that users do have the ability to fine-tune some aspects of how applications access their data. Those settings are somewhat buried, however. (To get to them, users must go to the "privacy" section of the service, and then select the "profile section.")

"Obviously, privacy and security are a huge priority for Facebook," she added.

B.J. Fogg, director of Stanford University's Persuasive Technology Lab, co-teaches a course at the university about developing Facebook applications. He agreed that many applications can see more user information than they need to. But he argued that the risks of using Facebook applications are minimal. "Like most things in the world, it is a trade-off, and the risks are low compared to the benefits," he said.

Even if a malicious application developer could snag all of the information from someone's Facebook profile, they probably wouldn't have enough to do anything terribly damaging with the information because the site doesn't store social-security numbers or other sensitive data, he said. "I can't come up with a really terrible story" or worst-case scenario, he said. Facebook has a high incentive to strictly enforce its policies and ban any abusive applications that might pop up, Mr. Fogg said.

He also argued that most users of the social-networking service were aware that the applications they installed could monitor their information. "Facebook has this ethic of openness, and if you're on Facebook, there are certain things you share with other people," he added.

Most Internet users these days seem far less concerned than Ms. Felt about the information they share online. In a survey conducted last year by the Pew Internet & American Life Project, 61 percent of respondents said they did not feel a need to limit the amount of information that could be discovered about them online.

"By and large, people aren't worried about the personal information about them that's available online, which is striking," said Mary Madden, a senior research specialist for the project.


This is not the first time that concerns about privacy have come up in regard to Facebook. The bottom line is that it is a mistake to consider anything private that's online.
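To make the study's point about over-sharing concrete, here is a purely hypothetical sketch, not Facebook's actual platform code, of the difference between handing a plug-in an entire profile and handing it only the fields it declares it needs; all of the field names, values, and applications are invented for the example.

```python
# Hypothetical illustration of least-privilege data access for plug-ins.
# None of this reflects Facebook's real API; the data below is made up.

FULL_PROFILE = {
    "name": "Jane Student",
    "network": "Fordham",
    "friends": ["Alice", "Bob"],
    "birthday": "1987-04-12",     # private data most apps never need
    "hometown": "Yonkers, NY",
}


def share_with_app(profile, requested_fields):
    """Return only the fields an application explicitly requests,
    instead of exposing the entire profile."""
    return {field: profile[field] for field in requested_fields if field in profile}


# A library-catalog search plug-in arguably needs nothing beyond a display name.
catalog_app_view = share_with_app(FULL_PROFILE, ["name"])
print(catalog_app_view)       # {'name': 'Jane Student'}

# A birthday-calendar plug-in would have to ask for more, and the user could
# then judge whether that request seems reasonable before installing it.
birthday_app_view = share_with_app(FULL_PROFILE, ["name", "birthday"])
print(birthday_app_view)
```

Under a scheme like this, a catalog-search plug-in would see a name and nothing else, which is roughly the kind of fine-tuning the study urges social-networking sites to offer.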