Below is the link to our Project 5 Podcast about Notre Dame's Computer Science Curriculum:
drive.google.com/a/nd.edu/file/d/0BwGQeBGbIySPVDhUdEt6MmZJbHc/view?usp=sharing

Podcast by Luke Garrison, Nick Ward, Anna McMahon
You can access the podcast as long as you are using an ND Google account.
I am torn on whether patents should be granted in general. On one hand, they provide the economic incentive to invest in the research required to create new, innovative products and technology. On the other hand, they can lead to patent wars between corporations and seem to inhibit innovation. Patent trolls are certainly a problem: they often have no intention of manufacturing or using what they patent, and are instead interested only in monetizing their patents by suing other companies. While they are able to do this legally, it hinders innovation and does not benefit society in general. Their existence is evidence that the patent system is not working, because the fine print of patents is being exploited for financial gain in ways that were never intended.

This is the main problem with patents: sometimes they are beneficial to society and sometimes they are not, and it is difficult to know the difference until after the fact. According to the US Constitution's Copyright Clause, "Discoveries" as well as "Writings" can be secured for a limited time to promote the Progress of Science and useful Arts. "Discoveries" seems to imply that ideas can be protected, which suggests that software would be patentable (the ideas more than the implementation). For example, if a company spends time researching an advanced machine learning technique that improves results, it seems like the company should be able to patent this as a reward for its investment, just like any other non-software company. However, when things like the Apple vs. Samsung patent wars happen, it makes me re-think how effective patents are and whether they are actually hindering or incentivizing research and progress.
Part of the reason Tesla decided to give up its patents is that they were standing in the way of the company's ultimate mission: to increase the number of electric vehicles and combat the carbon crisis, as explained in "All Our Patent Are Belong To You." However, most companies do not have such a clear, specific mission that benefits society as a whole, so Tesla's reasons for forfeiting its patents do not apply to them.

There are a few motivations behind developing self-driving cars. While some are motivated by the potential to improve the environmental impact of our vehicles or to lower the cost of using vehicles via a shared-car economy, and others are motivated by the elimination of human error in driving and the potential for increased safety, I am most excited by the convenience factor. In many ways, time is invaluable, and unfortunately for millions of people, a substantial portion of their time, week in and week out, is spent traveling in their car to work. What if they didn't have to pay attention to the road, and instead could take care of some shopping, answer emails, or take a much-needed nap as their car safely drove them to their destination? I cannot help but be excited by this possibility.

Unfortunately, things are not so simple in practice. While the reasons above are strong reasons to consider self-driving cars, there are a number of reasons not to. For example, autonomous vehicle technology will put the US's largest labor force out of work, and could cripple the economy with a ripple effect from truck drivers to those employed by the businesses truck drivers patronize when they stop. TechCrunch's article "The driverless truck is coming, and it's going to automate millions of jobs" shares many of these same concerns. The issue is that 1.6 million people suddenly without a job is not an easy problem for our country, and there is no painless transition for them.
Many have also voiced valid safety concerns, such as whether the vehicles can truly handle heavy snowfall and precipitation, as well as problems with transitioning from today's vehicles to fully autonomous vehicles. The problem is that when vehicles are partially, but not fully, autonomous, people tend to trust the technology too much and put their lives in danger, forgetting that the system only needs to fail a small fraction of the time for the consequences to be fatal. This is why Google decided to go all in on fully autonomous vehicles and has removed traditional user controls such as pedals and steering wheels.
Lastly, autonomous cars have struggled with, and continue to struggle with, the philosophical and ethical issues surrounding "who to save" in specific no-win scenarios. Some of these are listed in "The social dilemma of autonomous vehicles" in Science. Manufacturers, software engineers, governments, and users must decide how to handle this issue: save the passengers, or maximize the overall social good. Because this is a complicated ethical issue, I believe a good solution might be to allow those using the vehicle to customize its behavior to match their own moral and ethical compass (within reason). With self-driving cars, I believe that the software companies would have to be liable for accidents, since there could be no human driver. While there would certainly be a sense of loss if I lost the ability to drive my car, I think it would soon be outweighed by the value of saved time and lessened stress with each trip in a self-driving car.

The field of Artificial Intelligence was named deliberately: software is written to create the appearance of intelligence, even if it is not human-like. The vast majority of AI is not human-like intelligence. Instead, the software seeks to simulate human-like behaviors, responses, or recognition. The reason the intelligence is artificial is that it has no flexibility. It can only understand what it is trained to understand, and really nothing more. For example, AI could be used to train a model that uses team and player statistics to predict football games, but that doesn't mean the same model can predict baseball games. More importantly, the way the program makes predictions and "learns" is vastly different from how humans do. Much of AI has to do with mathematics and statistics rather than a true understanding or an authentic intelligence. Another key difference is that machines do not have a sense of consciousness.
They can be programmed to have the appearance of consciousness, but they are not sentient. They may be able to perform specific tasks, but they can’t truly decide things for themselves or answer the question “why”.
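The football-prediction example can be made concrete with a minimal sketch (all statistics fabricated for illustration): the "model" below learns a single cutoff on average point differential that separates past wins from losses. That cutoff is the only thing it "knows," which is exactly why nothing transfers to baseball or any other domain.

```python
# Toy sketch of narrow "learning": pick the cutoff on point differential
# that best separates past wins from losses. All numbers are fabricated.
def train_threshold(samples):
    """samples: list of (avg_point_differential, won) pairs."""
    best_cut, best_acc = 0.0, 0.0
    for cut in (s[0] for s in samples):
        acc = sum((s[0] >= cut) == s[1] for s in samples) / len(samples)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

games = [(7.0, True), (3.5, True), (-2.0, False), (-6.5, False)]
cut = train_threshold(games)
print(5.0 >= cut)  # → True: predicts a win for a +5.0 differential
```

The entire "intelligence" here is one number; it encodes a statistical regularity in the training data, not any understanding of football.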
AlphaGo is certainly a form of Artificial Intelligence, but it is not extremely useful. It can strategize and plan moves by learning from past experiences. But improving this form of AI does not put us at an increased risk of machines plotting to overthrow humans, despite the dark predictions in "The A.I. Anxiety." Self-driving cars are a more advanced form of artificial intelligence because they must view their environment, interpret their surroundings, and take the best and safest course of action accordingly. Even so, self-driving cars are responding to training and pattern matching rather than truly understanding how to drive, even though the end result is similar in this case. I cannot currently see computing systems being considered a "mind". Instead, I see A.I. systems being considered tools that can process large amounts of data to make more informed decisions and recognize patterns. At least for the time being, I am more worried about the power of A.I. in regards to data mining of my information than I am about machines becoming sentient and plotting to overthrow humans. As explained in "Debunking the biggest myths about artificial intelligence," AI is not on track to create machines that can think as we do, nor is that its goal. Hopefully these machines continue to augment our intelligence without people assuming they are replacing it.

With the issue of censorship, there are two different aspects to consider: censorship by the government and censorship by companies. I don't believe the government should censor anything (nor does it have the right to, because of the First Amendment) unless it falls under the libel and slander laws. It is simply not the government's place to do so, and it would be far too easy to suppress people who don't share the majority's political views. Because of this, I argue that allowing governmental censorship would be unethical.
Censorship by companies is another issue, though. Companies, especially tech companies, are different because they provide services that we as consumers can voluntarily utilize. The company certainly cannot force us to use these services, and if we stop, so do some of their data points. When tech companies are smaller, censorship isn't really an issue because so few people are affected. However, when companies such as Facebook, Google, and Twitter have such a wide reach over the world population, they become a part of our daily lives and we begin to depend upon them, just like everyone around us. We begin to trust these companies both with our information and to deliver information to us. But what happens if they intentionally remove content so we can never see it in the first place?

For example, as explained in "The New Censorship" by Robert Epstein, Joyce Bartholomew's politically conservative, pro-life music video was removed from YouTube. The vast majority of users would never have even known it existed. Does YouTube have the right to remove this video? Absolutely. As "How Facebook Censors Your Posts (FAQ)" explains, these companies can do whatever they want; the First Amendment applies to the government, not to them. It is their platform, and you gave them the power by using their services. However, do I think it is ethical for them to remove political content? No. People should be able to make their own decisions and form their own opinions, and should not be tricked into altering their political views. Aside from the ethical argument, I believe it is bad for business. If word gets out that people's content is being removed when it probably shouldn't be, people will start to get upset and lose trust in the company. Political content gives users the chance to grow and be challenged rather than being shielded from certain opinions or news stories. However, there is absolutely nothing to be gained for anyone by allowing terrorist propaganda to float around.
It is ethical for companies to remove this sort of content because the only people it helps are the terrorists. That is certainly not to say that they have to, but since they have the power to, they should use it to remove such incitements of violence. If it is news about terrorist activities, on the other hand, then it is not a bad thing to leave that uncensored, since it is the truth and reminds people of the evil that exists in the world. In general, I am against censorship by anyone, as I want people to be able to access information and make their own judgments and decisions.

I would not say that encryption is a fundamental right. Instead, my stance is that the government has no right to prevent me from utilizing encryption. Saying someone isn't legally allowed to encrypt their data is like saying you can't ever have a secret that others don't have a right to know. I don't believe anyone is entitled to read anyone else's data unless the owner gives them permission. If someone has sensitive business documents on their computer, is it fair that they can't utilize methods such as encryption to protect that data from falling into the wrong hands? Google, for example, has buses for employees with VPNs that encrypt all data sent over the network. They do this so that no one can spy on the network and steal valuable information. I think it is perfectly legitimate for them to do this, because it harms no one else and the data is no one else's business in the first place.
Do you believe anyone should have access to open the mail that you send to people? While the government has the power to open your mail under extreme circumstances without a warrant, no one else does, unless they are willing to commit a federal offense. The same is not true of your data. If someone sniffs the packets your computer sends over the Internet, or steals your hard drive and looks at your files, this is not a federal offense, and it is far more difficult, if not impossible, to know that the act has been committed. One good way to protect yourself and your information is encryption. Not using encryption is similar to sending all of your mail, regardless of how sensitive, without sealed envelopes. You are freely inviting people to see your information.

While the issue of encryption is important to me, it is certainly not the most important political issue. If two candidates were tied on all issues except encryption, then it would matter. But there are many issues that I feel more strongly about and that carry more moral weight. If I were an investor in startups, I would certainly rather invest in a company that utilized encryption to keep users' data safe. At this time, it seems that encryption will win. It is usually too time-consuming to crack encryption using brute force. From the government's perspective, as the FBI demonstrated, they are willing to use all means necessary to attempt to crack encrypted data, but success is by no means guaranteed. I foresee this battle between the privacy of the individual and the fight for the "common good" continuing to rage. Companies want users to be happy and to keep their data safe, but politicians have an interest in fighting for the good of society in order to maximize their political influence. It would be naïve to assume that this would be the first time politicians want people to give up their rights for the sake of the common good.
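The sealed-envelope analogy can be illustrated with a toy cipher. This is a sketch for intuition only; repeating-key XOR is not real encryption, but it shows the basic shape of the idea: the same secret key both seals and opens the message, and without the key the bytes on the wire look like noise.

```python
from itertools import cycle

# Toy illustration only: XOR with a repeating key is NOT secure encryption.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

message = b"meet at noon"
key = b"secret"
sealed = xor_bytes(message, key)   # unreadable gibberish without the key
opened = xor_bytes(sealed, key)    # the same key "opens the envelope"
print(opened == message)  # → True
```

Real systems use vetted ciphers such as AES, but the asymmetry is the same: trivial for the key holder, infeasible for everyone else.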
At this time, I see encryption winning, because I don't think the government has the ability to outlaw it. In the event that political momentum were gained to pass such a law, I would certainly fight to overturn it. As this topic becomes an increasingly hot one, I hope that voters educate themselves about the benefits and concerns of both perspectives and make an informed choice for themselves.

Project 3 link: http://lkgarrison.weebly.com/home/project-3-encryption

To the editor of the Observer,
We wanted to raise awareness of this issue to you and the rest of the population of South Bend because we believe the implications of encryption, or the lack thereof, are very important and urgent. Our interest in this topic has risen in light of the recent debates over the government requesting that Apple provide a backdoor to the iPhone. We wanted to take the time to outline some of the implications of encryption in terms of the devices and services that the people of South Bend use every day. The vast majority of people are not informed about the implications of encryption as it relates to the ability of companies and governments to see people's data. We want to raise awareness because we believe that people have the right to make an informed decision regarding the issues surrounding encryption.

So what does a world with encryption look like? There are benefits to your data being encrypted. If data is encrypted end-to-end, from when you hit the send button all the way through being stored in a database, the data is virtually unreadable to anyone else. Companies are not able to feed encrypted data into machine learning algorithms or use it for any sort of statistical analysis. With the sophistication of modern encryption, data is nearly impossible to recover without the key. Modern encryption algorithms could take upwards of 100 years to crack, even with maximum computing power. Clearly, preference is given to the user's privacy in this case. Of course, encryption isn't always end-to-end, as with the iPhone. The data on the iPhone is encrypted, but not the data that is sent to Apple's servers. This means Apple's databases are a separate issue, but Apple's devices themselves are entirely encrypted, meaning that if you read the contents of the storage on the phone, it appears to be nonsense. While this means that the data that is physically on your phone is safe, it also means that any terrorist's phone is also safe from authorities.
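The "100 years" figure is, if anything, an understatement. A back-of-envelope calculation, assuming a 128-bit key (the size used by common modern ciphers) and a hypothetical attacker testing one trillion keys per second, shows why brute force is hopeless:

```python
# Back-of-envelope: worst-case time to brute-force a 128-bit key.
keyspace = 2 ** 128                    # number of possible 128-bit keys
guesses_per_second = 10 ** 12          # hypothetical, very generous attacker
seconds_per_year = 60 * 60 * 24 * 365
years = keyspace / (guesses_per_second * seconds_per_year)
print(f"{years:.2e}")  # → 1.08e+19 (years)
```

Even this wildly optimistic attacker would need on the order of ten billion billion years, which is why attention turns to backdoors rather than brute force.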
Thus, the end user has a decision to make, although right now most consumers are unaware of the issue: do they value their privacy, or a chance for the government (and also, inevitably, hackers) to access the device's contents in an attempt to bring about justice? To think about this on a grander scale, consider the encryption of machines in the cloud. Think about the implications of companies being able to encrypt the data on their machines in Azure's data centers, for example. When public cloud providers such as AWS and Azure allow companies to encrypt data on their rented servers, it becomes impossible for the public cloud provider, and thus the government, to gain access to the information. This allows companies to have much more privacy for themselves and their customers.

So what does a world without encryption look like? A world without encryption would open opportunities for government backdoors, such as the one that the government requested from Apple. If the government were to have a way to enter a device in the case of an emergency, there would be a security vulnerability purposefully introduced into the system. This would quickly become a fatal issue once a skilled hacker figured out how to exploit the vulnerability. As is, it is already nearly impossible to keep an internet-connected device secure, even while developers work tirelessly to eliminate vulnerabilities. A world without encryption, in terms of the iPhone, means that your data is sitting on your device, waiting to be read by anyone with the technical knowledge to do so.

A world without encryption also means that companies get to see your data often. This has a lot of side effects, one of which is more relevant ads. When a company has access to your data, it can easily learn a lot about you, which means better-targeted ads. In fact, Target claims that it usually knows of a woman's pregnancy before her spouse does.
By using machine learning on a woman's purchases, Target can often tell that she is pregnant. This is just one example, but the point is that even from a small amount of data, a lot can be learned. Should encryption be allowed, and is it beneficial for individuals as well as society as a whole? It often comes down to personal privacy versus an increased ability to prevent further criminal acts. While people will certainly disagree on the topic, we hope that they at least take the time to form an informed position of their own.

CNN raises what initially seems to be a good point in the article "No, the presidential election can't be hacked": our election system is very decentralized, with a wide variety of voting implementations, making it very difficult to mount a coordinated attack that meaningfully alters the nation's election results. Since votes do not go to a nationally centralized server, the task of rigging or hacking the election is considerably more difficult. However, what CNN's article does not take into consideration is that historically, most states do not influence the outcome of the election, because they almost always vote for the same party. After looking at the map at http://ijr.com/2015/05/317110-election-whiz-reveals-seven-states-really-matter-comes-deciding-president/, one can see that there are only 7 true "swing states", with a handful of others that lean only slightly to one side of the aisle or the other. This means that a hacker's target is considerably reduced, from 50 states down to fewer than 15. This isn't to say that the task is suddenly easy, but hacks wouldn't need to occur at as large a scale as you might initially suspect. When combined with the fact that a shocking number of counties and states are using electronic voting equipment that is dangerously outdated and insecure, this invites the opportunity for a malicious attack on the election system.
As demonstrated by Princeton professor Andrew Appel in the article “How to Hack an Election in 7 Minutes”, old voting equipment is very susceptible to tampering. The question is what do we do about it?
Is it possible to enjoy the convenience and efficiency of electronic voting and also achieve improved security and reliability? From a technical perspective, I believe yes. If companies can securely support online banking and let users pay others from their smartphones without fear of losing their savings, I don't see why a reliable electronic voting system is not possible from a technical perspective. The issue is the practicality, especially in terms of cost. In its current state, I do believe that we are leaving our electronic voting system susceptible to hacking, but I think with the right technical solutions, this would not be an issue. While the technical solutions for building a secure CRUD application are very good, that isn't a solution that works for our voting system, because it needs to be decentralized. The current electronic voting system is not connected to the internet, which is what helps prevent cyber-attacks. I would be much more likely to trust an electronic system that had multiple redundancies and provided a strong audit trail. Otherwise, voters are left wondering, even hoping, that their votes were counted correctly. The situation of lost votes described in Bloomberg's "The Computer Voting Revolution Is Already Crappy, Buggy, and Obsolete" should absolutely never happen.

I think it is ethical for companies to gather your information and use it to sell you products and services, so long as the user understands what the company is doing with the data. This is the difficult part, and companies know that the vast majority of users won't think for very long about what they may or may not do with the data. However, assuming users know what the company intends to do with their data, I think this is ethical, as users can enter into a voluntary agreement or an exchange of goods and services with the company.
People who are willing to give a company like Google information that enables it to show more relevant advertisements (not necessarily a bad thing in and of itself) may believe this tradeoff is worth it in order to access Google's services.
I think it is easy for people to forget that companies like Google and Facebook do not exist for the sole purpose of giving users exceptional, free products. You aren't entitled to their products or services. The price you pay to use them is that these companies can collect data about you and sell advertisements targeted at you based on this data. If that is something that you are okay with, then go ahead and use their products and services. If it isn't, then don't! You can also control what data these companies actually collect on you. For example, at www.google.com/dashboard, you can control exactly what information Google saves, and even delete data they have already collected.

The author of "Not OK, Google" seems very distraught by Google's business model. I think the author fails to keep in mind that users voluntarily use Google's services; they are certainly not forced to. An argument could be made that users aren't fully aware of what exactly Google does with the data it collects, but this is not discussed by the author. Yet I believe being educated about what companies plan to do with the data they collect is extremely important. Many users don't step back and look at the bigger picture: while no single service gives your life story to a company, over time the amount of data you share can be used to learn much more than you think. Companies should never be blindly trusted, but I don't have a problem when people agree to give access to some of their data in exchange for powerful and useful services. Because of the sheer number of online services, I do fear that carefully considering privacy is becoming increasingly difficult and that users are increasingly trusting companies blindly.
While it is difficult to learn what exactly companies will do with the data they collect, you must also consider that even if they never share this information willingly, they may be hacked and their databases compromised, meaning your data could fall into the wrong hands.
Author: Senior computer science major at the University of Notre Dame