Our “shop for free online” paper

Our paper “How to Shop for Free Online” has been getting some publicity recently, which I am happy about, although Shaz and I did not directly participate in any of the interviews for a non-academic reason. I have compiled some news articles below:

Researchers find major flaws in online payment systems, CNN, April 13, 2011
Exploit-wielding boffins go on free online shopping binge — World’s biggest e-commerce sites wide open, Register, April 12, 2011
Could criminals shop for free online?, CNET, April 11, 2011
Security Researchers Exploit Logic Flaws to Shop for Free Online, Network World, April 11, 2011

The paper describes nine logic bugs in a set of representative merchant applications that integrate the third-party cashier services PayPal, Amazon Payments, and Google Checkout. The shopper is assumed to be completely malicious, and can thus play tricks that tell slightly inconsistent stories to the merchant and the cashier. As a result, the cashier is not 100% sure about “how much”, “to whom”, or “for which order” the shopper should pay, and the merchant is not 100% sure about “how much”, “to whom”, or “for which order” the shopper did pay.
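To make the “for which order” confusion concrete, here is a deliberately simplified, hypothetical sketch (none of this code is from the paper, and all names are made up): a merchant that correctly verifies a cashier receipt is genuine, but never cross-checks the receipt against the order the shopper asks to have shipped.

```python
# Toy model of the "inconsistent stories" idea: the shopper tells the
# cashier one thing and the merchant another, and the checkout logic
# never reconciles the two.

def cashier_charge(payments, order_id, amount):
    """The cashier records the payment and issues a receipt."""
    payments[order_id] = amount
    return {"order": order_id, "paid": amount}

def merchant_ship(catalog, payments, claimed_order, receipt):
    """Buggy merchant logic: it verifies that the receipt corresponds to
    real money at the cashier, but trusts the shopper's word on WHICH
    order that receipt pays for."""
    receipt_is_valid = payments.get(receipt["order"], 0) >= catalog[receipt["order"]]
    return receipt_is_valid  # ships `claimed_order` without cross-checking it

catalog = {"pen": 1, "laptop": 1000}
payments = {}

# Malicious shopper: pay $1 for the pen...
receipt = cashier_charge(payments, "pen", 1)
# ...then present that perfectly valid receipt while asking for the laptop.
print(merchant_ship(catalog, payments, "laptop", receipt))  # True: laptop ships for $1
```

The fix in this toy setting is one line: the merchant must also check `receipt["order"] == claimed_order`, i.e., the merchant, the cashier, and the shopper must all agree on the same order identity and amount.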

All the news articles above only talk about the bugs. Another interesting study in the paper appears in Section V, in which we used Poirot (developed by MSR Redmond’s RiSE group and MSR India) to measure the logic complexity of a checkout mechanism. If you are a professor or lecturer who needs a real-world example to show the benefits of formal methods, I recommend checking out our case study page. The page is designed as a homework assignment to challenge your students.

Posted in research | Leave a comment

I made tools; you should use; there’s no excuse. If you don’t; I don’t care; it’s still my success to declare.

Some people in the program verification community seem to treat tools as solutions: “Because I invented a verification tool, you should have used it. You didn’t use it? Well, that’s your problem, but I am still successful because the whole ‘research’ problem has been solved.”

Verification is a heavy tool. Don’t tell people to use it just because it has been invented. It is an essential part of researchers’ responsibility to tell them where exactly to use it and to convince them why they should. (BTW, this job is perhaps beyond the verification community’s reach, and falls into systems researchers’ scope.) An oil rig “can” get oil from the earth, but people won’t blindly use such a heavy tool everywhere. They need a geologist to tell them where to drill, with convincing evidence that there is indeed oil underneath. The geologist does not build the oil rig (he/she probably doesn’t even care about the oil rig), but I would certainly call his/her work important research.

Posted in research | Leave a comment

Why am I obligated to show that I am no smarter than a machine?

I attended an internal talk a few weeks ago (I am not sure whether I should disclose the presenter’s name). It was about a new technology, and the point in the talk that I appreciated the most is that the goal of this technology is to make valuable findings, rather than to achieve automation.

Here is my thought along this line. It seems that automation is an important metric that many people use, consciously or unconsciously, to evaluate research contributions in all areas of computer science. It doesn’t matter how surprising, novel, or insightful your findings are; people care very much about whether your thinking process was automated. It doesn’t matter how challenging it is to analyze real-world systems due to their messiness; people care very much about whether neat models extracted from real systems can be automatically checked, but forget about the intelligence required to obtain these models. At least, such intelligence is not considered science, because it is not automated.

Why is that? Why do we believe so deeply that ONLY automated thinking is science? Why do we so devalue the very portion of human intelligence that machines cannot mimic? Other scientific communities do not hold such a belief. Did people criticize Newton’s laws of motion because Newton didn’t come up with them mechanically, but relied on many empirical experiments and his smart brain? I believe that real science is advanced by HUMAN intuition and creativity. A scientist’s goal is to make discoveries that surprise the world. He/she is under no obligation to show that such cool discoveries could have been made by a robot as well.

Most of us agree that computers are fundamentally dumb machines, regardless of how science fiction depicts them. There is nothing wrong with a scientist being smarter than a machine. Perhaps the mindset of “only automatable thinking is science” is hurting our field, because it boosts so many papers containing mediocre-yet-automatable ideas, and knocks out others containing insightful human thought.

I told my daughter that I am a “computer scientist”. Now this phrase seems confusing to me. Maybe I should simply call myself a “scientist”, which makes it clear that I am still a human being.

Posted in research | 3 Comments

We discovered a security vulnerability in Facebook’s authentication

Several weeks ago, Rui Wang and Zhou Li, under the guidance of Prof. XiaoFeng Wang and me, discovered a security vulnerability in one of Facebook’s authentication mechanisms. We privately notified Facebook soon afterwards, and it was fixed last week. Facebook’s security team considered this a “serious vulnerability”, and they acknowledged us on the Facebook Security White Hats page.

A video showing the attack has been uploaded to YouTube. The vulnerability allows a malicious website to impersonate any legitimate website. As shown in the video, this has a number of implications: (1) Any user with a valid Facebook session loses his/her anonymity and privacy. Specifically, any website (e.g., one with embarrassing or sensitive content) can obtain the user’s name as registered on Facebook, which is typically his/her real name. This is because we can impersonate Bing.com, which can get the user’s basic information; no user consent is required. (2) If the user has ever granted any website, such as NYTimes, YouTube, Farmville, or ESPN, the permission to connect to his/her Facebook account, further damage can be inflicted, including disclosure of private data that the user does not want to share with others, and impersonation of the user to post bogus news/comments/updates on friends’ walls.

This article gives details about how a malicious website can steal the authentication token that Facebook tries to pass to the victim website: Informatics students discover, alert Facebook to threat allowing access to private data, bogus messaging
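The linked article describes the actual mechanism. As a deliberately simplified, hypothetical sketch of this general class of flaw (all names are made up, and the real bug involved Adobe Flash cross-domain communication rather than this particular check), the failure mode is an identity provider that mints a token for one application but delivers it to whatever destination the request names:

```python
# Toy model (NOT Facebook's actual code) of token delivery that trusts
# an attacker-influenced destination.

REGISTERED = {"bing": "https://bing.example/callback"}

def issue_token(app_id, claimed_destination):
    """Buggy provider: mints a token for `app_id` but sends it to
    whatever destination the request claims, never checking it against
    the address registered for that app."""
    return claimed_destination, f"token-for-{app_id}"

def issue_token_fixed(app_id, claimed_destination):
    """Correct provider: the token only ever goes to the registered address."""
    if claimed_destination != REGISTERED[app_id]:
        raise ValueError("destination does not match registered callback")
    return claimed_destination, f"token-for-{app_id}"

# The attacker asks for Bing's token, naming its own page as the destination.
dest, token = issue_token("bing", "https://evil.example/steal")
print(dest, token)  # the attacker's page now holds a token minted for Bing
```

In this toy version, the fix is simply to refuse any destination other than the one registered for the application; the real lesson is the same, namely that a token must be bound to, and delivered only to, the site it was issued for.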

The patience and agility that Rui and Zhou demonstrated in making this finding impressed me a lot. This authentication mechanism is part of Facebook’s platform code, and I would like to think that it had been carefully examined for security by many pairs of eyes. Rui and Zhou started with a number of hunches, but after they actually tried these ideas, they were quite frustrated. At one point, they felt that what they were doing was a direct confrontation with what Facebook tried to block. Despite the initial frustration, they kept going deeper, until the final missing piece was found – the unpredictable domain communication of Adobe Flash. Really nice job, guys!

Here is a collection of news articles about this finding since yesterday:
Facebook flaw allowed websites to steal users’ personal data without consent, Naked Security
Facebook plugs gnarly authentication flaw, Register
New Facebook vulnerability patched, ComputerWorld
Facebook Fixes Security Vulnerability, eWeek

Posted in research | Leave a comment