Patent Matters – Don’t Hate the Player, Hate the Game

The recent acquisition of the Netscape/AOL patent portfolio reminded me that an update on Mozilla’s patent strategy is long overdue. This post is about what we’ve done and what we could/should do in the future.

As you may have seen, there’s been a lot of patent litigation activity lately. The Yahoo suit against Facebook is one of the most surprising – at least to me. And the US Supreme Court just recently weighed in to re-affirm a long-held axiom of patent jurisprudence, that laws of nature are not patentable subject matter, so the judiciary is getting more active as well.

What’s driving the increase in patent activity? In my view there are numerous drivers, including increased competition in the mobile space, the desire for competitive advantage (particularly if a company is struggling in the market), and demands for incremental license revenues. Invariably, patent portfolios become more attractive tools for revenue and market competition when a business is not doing well or is threatened.

The traditional strategy has been for each company to develop the largest possible patent portfolio to act as a deterrent against potential plaintiffs. This is known as a defensive approach. Some companies commit to using their portfolios only defensively, others make no such commitment, and still others do a bit of both depending on the circumstances. For early-stage companies and start-ups, patent rights may also be important: if the business fails in the market, IP rights may turn out to be the most valuable asset for investors.

I personally struggle with the effectiveness of “build a big patent pool” as a one-size-fits-all approach. It may not work if you’re way behind in the game or conflicted about software patents, and if done organically, it simply takes too long. In other settings, however, it may make perfect sense, especially with enough resources and sufficient inventive material relevant to your competitors. I got to do this for a few years in my first in-house counsel job working for Mitchell Baker long ago, when I was tasked with creating the initial Netscape patent portfolio.

So far Mozilla has not adopted the traditional strategy. A while back we made an exception and filed four patent applications on some novel digital audio and video compression techniques co-invented with a contributor at the time. We assigned those applications to xiph.org, a non-profit focused on open video and audio codecs. The assignment included a defensive patent provision that prevents the patents from being used offensively. One of those applications has been published for examination as part of the standard USPTO patent application process. We believe these applications may help in standards settings, where they could help us achieve a better open standard for audio codecs. For better or worse, participants in standards bodies use their IP to influence the standards, and without some leverage you’re left with only moral and technical arguments. We’ll see if our theory plays out in the future.

We haven’t filed other applications yet, but I don’t think the past should necessarily dictate the future. I can imagine many places where inventive developments are occurring that have strategic value to the industry, and where we want those protocols, techniques, and designs to stay open and royalty-free to the extent they are essential parts of a robust web platform. At those strategic intersections, I think we should entertain filing patent applications as one tool in our overall strategy.

In addition to patent filing strategies, there are other things we could do, including:

  • Adopting techniques to constrain offensive use, like the Innovator’s Patent Agreement with defensive use terms proposed by Twitter today. (+1 for Ben and Amac at Twitter for this)
  • Building out a robust defensive publication program. IBM wrote the book on this; maybe it’s time to make source code publications work the same way.
  • Developing an ongoing prior-art system available to defendants. We worked on a version of this a few years back, but the urgent beat out the important and no progress has been made since then.
  • Pooling patents with other like-minded groups into safe, pro-web entities with defensive protections. The pools need to be relevant to competitive threats for this to have value in my view.
  • Creating other disincentives to the offensive use of patents (similar to the MPL’s defensive patent provision) but relevant to larger parts of the web.

Sometime mid-year, I’d like to have a broader discussion to brainstorm further and prioritize efforts. Nonetheless, I’m pretty confident that given the changing landscape and markets, we’ll need to play in this domain more significantly one way or the other.

SOPA – the Stop Online Piracy Act – Is It Really Dangerous?

Recently, the Stop Online Piracy Act, 112 HR 3261 (SOPA), was introduced as a bill in the US House of Representatives. This is the House companion to the Senate Protect-IP Act that drew considerable opposition from the tech and First Amendment quarters, so many of the issues remain the same. The intent of SOPA is to help combat online piracy. This is a laudable goal; however, the unintended consequences are scary for intermediaries, websites with user-generated content, DNS providers, and those of us who rely on the Internet as a vibrant and rich communications network.

SOPA grants IP claimants a lot more power than they currently have to remove allegedly infringing content and expands the scope of people who may be liable by giving:

  • the Attorney General the power to compel companies that maintain DNS look-ups to change their look-up tables, also known as domain name filtering (a rough sketch of the mechanism appears below). See analysis by Larry Downes.
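
To make that remedy concrete, here is a minimal sketch of what resolver-level domain name filtering amounts to. It is illustrative only – the domains and addresses are invented, reserved example values, and nothing here is drawn from the bill’s text or any real resolver:

```python
# Hypothetical sketch of resolver-level domain filtering (illustrative only;
# the domains and IPs are reserved example values, not real data).

BLOCKED_DOMAINS = {"seized-example.com"}  # names a filtering order covers

DNS_TABLE = {
    "seized-example.com": "203.0.113.10",
    "example.org": "198.51.100.7",
}

def resolve(domain: str) -> str | None:
    """Return the IP for a domain, or None if the name is filtered."""
    if domain in BLOCKED_DOMAINS:
        return None  # the resolver answers as if the name does not exist
    return DNS_TABLE.get(domain)

print(resolve("example.org"))         # 198.51.100.7
print(resolve("seized-example.com"))  # None: suppressed at the resolver
```

Note that the underlying server remains reachable by IP address or through any unfiltered resolver; only the name lookup is suppressed, which is one reason critics question the remedy’s effectiveness.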

The problem is that these are powerful remedies made available on the basis of unproven assertions and little due process. Imagine you’re a website operator: under SOPA, your PayPal payment processing can be cut off merely because someone claims there is infringing content or apps on your site. Faced with that choice, the decision is easy: remove content early and often just to be safe.

IP rights are certainly important and need to be respected on the Internet, and there is a very real piracy problem, but SOPA threatens an essential attribute of the Internet – its ability to share information easily, without friction or permissions. This doesn’t mean the Internet should be a lawless expanse devoid of consequences. The challenge is that SOPA exposes intermediaries to undue financial and legal liability for content in a way that will undoubtedly chill the free flow of content and ideas embodied in both software and media. In addition, the language in the bill is ambiguous, leaving it open to abuse by plaintiffs who have already demonstrated aggressive interpretations of the existing DMCA framework. This is why there is so much concern that SOPA represents a real and dangerous threat to the Internet.

Some describe this debate in polemic terms, as Hollywood vs. the Internet, where the Internet slowly becomes managed by dominant media interests. Others have focused on the deleterious impact on human rights. Perhaps Tim Wu, author of The Master Switch, would see this as part of a larger pattern of how open information ecosystems become closed over time. US House Representative Zoe Lofgren, who represents voters in Silicon Valley, warns that this “would mean the end of the Internet as we know it.” It could also just be bad legislation.

If SOPA becomes law, few think it will actually solve the problem. For example, it seems clear that blocking domains is not an effective means to combat piracy because domains can be redirected so easily. A while back, Homeland Security asked Mozilla to take down an add-on without a court order or a finding of liability. Under a SOPA regime, it appears the same incident would allow the putative plaintiffs to petition the Attorney General to issue an injunction compelling the take-down based only on a specious claim of contributory infringement. Oddly, SOPA makes one really appreciate the DMCA.

Many in the tech and policy communities are organizing to oppose SOPA. What’s most important is that Congress hears from everyone on this, whatever their view. Plus, it’s Tuesday, November 8th – voting day – so let your voice be heard. If you want to let Congress know that you oppose the legislation, EFF and Public Knowledge have sites set up to easily send your message to Congress.

Additional links to the bill and other commentary can be found below.

Homeland Security Request to Take Down MafiaaFire Add-on

From time to time, we receive government requests for information, usually market information and occasionally subpoenas. Recently the US Department of Homeland Security contacted Mozilla and requested that we remove the MafiaaFire add-on. The ICE Homeland Security Investigations unit alleged that the add-on circumvented a seizure order DHS had obtained against a number of domain names. MafiaaFire, like several similar add-ons already available through AMO, redirects the user from one domain name to another, much like a mail forwarding service. In this case, it redirects traffic from seized domains to other domains; the seized domain names allegedly were used to stream content protected by the copyrights of professional sports franchises and other media concerns.
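
As a rough illustration of the mechanism at issue, here is a simplified sketch of the core redirect logic. This is not MafiaaFire’s actual code, and the domain mapping is invented:

```python
# Simplified sketch of a domain-redirect add-on's core logic (illustrative;
# not MafiaaFire's actual implementation; the mapping below is hypothetical).
from urllib.parse import urlsplit, urlunsplit

REDIRECT_MAP = {
    "seized-example.com": "mirror-example.net",
}

def rewrite_url(url: str) -> str:
    """Rewrite the host of a URL if it appears in the redirect map."""
    parts = urlsplit(url)
    target = REDIRECT_MAP.get(parts.hostname or "")
    if target is None:
        return url  # not a seized domain; leave the request alone
    return urlunsplit((parts.scheme, target, parts.path, parts.query, parts.fragment))

print(rewrite_url("http://seized-example.com/stream"))  # host rewritten
print(rewrite_url("http://example.org/page"))           # unchanged
```

The point of the sketch is how little such an add-on does: it consults a published mapping and rewrites the host name, much as a user could do by hand in the address bar.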

Our approach is to comply with valid court orders, warrants, and legal mandates, but in this case there was no such court order.  Thus, to evaluate Homeland Security’s request, we asked them several questions similar to those below to understand the legal justification:

  • Have any courts determined that the MafiaaFire add-on is unlawful or illegal in any way? If so, on what basis? (Please provide any relevant rulings.)
  • Is Mozilla legally obligated to disable the add-on, or is this request based on other reasons? If other reasons, please specify.
  • Can you please provide a copy of the relevant seizure order upon which your request to Mozilla to take down the MafiaaFire add-on is based?

To date we’ve received neither a response from Homeland Security nor any court order.

One of the fundamental issues here is under what conditions intermediaries should accede to government requests that have a censorship effect and that may threaten the open Internet. Others have commented on these practices already. In this case, the underlying justification arises from content holders’ legitimate desire to combat piracy. The problem stems from the use of these government powers in service of private content holders when that use can have unintended and harmful consequences. Long term, the challenge is to find better mechanisms that provide both real due process and transparency without infringing upon the developer and user freedoms traditionally associated with the Internet. More to come.

Recent Changes in US Crypto Export Rules

On January 7, 2011, the US Government published a final export rule that relaxed export rules on publicly available encryption code. Previously, mass market encryption object code software was subject to US export controls. Under the new rule, issued by the Bureau of Industry and Security (BIS), publicly available, mass market encryption object code software with a symmetric key length greater than 64 bits is no longer subject to the export control rules. Although the change will have little direct impact on Mozilla, because our code already falls under the TSU source code exception, it is good news: it simplifies and reduces the number of rules that might restrict distribution of publicly available, mass market encryption object code software outside the US.

BIS reasoned that because there are no regulatory restrictions on making such software publicly available, and because, once it is publicly available, by definition it is available for download by any end user without restriction, removing it from the jurisdiction of the Export Administration Regulations (EAR) will have no effect on export control policy. Such policy is merely clarified and confirmed by this final rule.

This rule change follows the guidance of government and export law attorneys like Dan Minutillo (who also represents Mozilla), who argued in a recent California International Law Journal article that the Government should remove publicly available encryption code from the scope of items subject to the EAR, based on the interpretation of a September 11, 2009 Advisory Opinion by the Director of the Information Technology Controls Division, Office of National Security and Technology Transfer Controls, US Government. It seems this 2009 Advisory Opinion can be interpreted to relate directly to a Voluntary Self Disclosure filed by Minutillo on behalf of Mozilla regarding the exchange of code, which resulted in a “No Violation Letter” from the US government in Mozilla’s favor.

FCC Chairman Genachowski’s Proposal on Net Neutrality

Today, the Chairman of the Federal Communications Commission, Julius Genachowski, issued a statement outlining a proposal to codify the open Internet principles and provide some resolution to the net neutrality debate. If adopted by the FCC at a hearing scheduled for December 21st, the new rules would represent a significant step forward in protecting users and innovation on the Internet. No doubt the rules will not be perfect, nor achieve all of the aims sought by net neutrality proponents. As a whole, though, they represent major progress and reflect a delicate balance of the concerns of an array of stakeholders with often competing interests.

As proposed, the rules will establish needed non-discrimination, no-blocking, and transparency principles for wireline communications. On the wireless side, they will prohibit carriers from blocking lawful websites or competitive voice, video, or telephony services, and will require transparency in network management practices.

We urge the FCC to continue its efforts to promote rules that encourage a single framework regardless of the type of network. In the long term, there is “one Internet”, and the rules should be the same for wired and wireless transport, particularly given the importance and growth of wireless Internet access. Nonetheless, we’re pleased by the Commission’s efforts to protect the Internet and the qualities that have made it both so valuable and transformative.

Other Articles:

http://voices.washingtonpost.com/posttech/2010/12/fcc_chair_announces_net_neutra.html

Net Neutrality – Comments to the FCC

The FCC recently asked for additional comments in its ongoing proceeding regarding Open Internet Principles. In particular, the FCC sought specific input on whether the openness principles should apply to both wireline and wireless networks.

We submitted comments in response to the FCC’s inquiry supporting application of the Open Internet principles to wireless networks. Relevant portions of the submission are shown below:

There is, and should be, only one Internet. Historically, the Internet has not distinguished between various forms of content or how users access such content. This non-discrimination has allowed consumers and software developers to choose between locations, platforms, and devices, all without complex negotiations with transport networks. This freedom has been a key reason why the Internet is so creative, competitive, and consumer-friendly. Internet users now benefit from this flexibility as they access the Internet across a wide range of devices and access points, including 3G/4G, WiFi, and wired networks. The wave of new Internet-enabled mobile devices, such as the iPhone, iPad, and a broad range of smartphones, including BlackBerry, Palm, and Android-based devices, will continue to drive exponential increases in mobile Internet access. The central fact is that wireless Internet access is as important as wired Internet access.

The increasing importance of mobile networks is not the only reason policy should be network agnostic. Users should not have a significantly different experience as they move back and forth between connection types, and they should not have to be aware that one regulatory regime (applicable to wired and WiFi access) protects their ability to access content of their choosing, while another regime (for mobile wireless) does not. At the end of the day, users are not deciding to access a “wired platform” and then a “wireless platform” – they are simply deciding to access the Internet, and their access to content should not depend on how they happen to connect at any given moment. Given the undisputed importance and growth of wireless Internet access, the value created by keeping all Internet access open and neutral, and user expectations of a single Internet, it is imperative that the Commission protect the entire Internet, not just the wireline portion. The best way to do this is to extend the open Internet principles to wireless providers and protect the Internet, not the network.

We trust the FCC will consider these comments, and the many others like them, in reaching its final decision.  You can submit your own comments here.

Related Links:

Search FCC for other comments

Open Internet Coalition Comments

CDT Comments

Nero Antitrust Claims Against MPEG LA – Recent Court Action

What does the recent district court’s dismissal of Nero’s antitrust claim really mean? Perhaps, not too much. The Court dismissed Nero’s antitrust claim with leave to amend. This is a common procedural move when the Court needs additional factual information to validate that a statutory claim has been made.  At this stage of a proceeding, the Court is not rendering decisions on the merits of the claim, only whether the pleadings themselves properly state a claim.

It seems the Court understands and accepts Nero’s antitrust legal theory.  Otherwise, the Court would not have given Nero leave to allege additional facts supporting that theory.  It seems that an amended complaint that adds specific patent expiration dates and examples of nonessential patents in the MPEG-2 patent portfolio of 900+ patents would be enough to carry the day.

Some other perspectives and background:

http://weblogs.mozillazine.org/roc/archives/2010/05/nero_vs_mpegla.html

http://www.law.com/jsp/cc/PubArticleCC.jsp?id=1202458503025

http://news.ycombinator.com/item?id=1372536

Legal Reference Terms – Survey Results

Last fall we posted a survey designed to test some assumptions about whether legal transactions could be made more efficient – the classic better, faster, cheaper proposition – and perhaps even more rational. The questions were designed, if that word can be used without insulting the science of the survey industry, to test assumptions about both the causes of transactional friction and potential solutions. The responses suggest two conclusions:

  1. Some type of standard terms would improve technology transactions (55%, increasing to 83% with those responding “maybe”); and
  2. Statistical data on the probability of certain risk scenarios would make the negotiation of technology transactions better (76%).

You can see the complete results at SurveyMonkey – Survey Results, but here are a few more details:

  • Nearly 80% said they would use such terms under certain conditions;
  • The top four most heavily negotiated terms were: i) indemnification; ii) license rights; iii) change of control; and iv) payment terms; and
  • 72% of the respondents don’t currently use any statistical data to assess risk probabilities while negotiating transactions.

The first premise is that some of the difficulty (time/cost/value) in conducting legal transactions, at least technology deals, is caused in part by the absence of any common legal framework. Thus, too much is negotiated with little return in real value to the client. In this case, a potential solution may look like atomic contract elements: not a prepackaged license or form, but constituent parts – predefined legal terms, provisions, and risk allocation choices – that could serve as building blocks for a complete transaction (a hypothetical sketch follows below). See, for example, Incoterms like “FOB” used to allocate shipping risk. Of course, there will always be some “negotiation”, but certainly every term doesn’t need to be uniquely defined and negotiated each time. That’s simply not scalable, nor a good use of resources.
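
To make the idea of atomic contract elements concrete, here is a hypothetical sketch of how predefined terms might be composed into a transaction, much like Incoterms codes. All of the codes, provisions, and function names below are invented for illustration; none of this is a proposed standard:

```python
# Hypothetical sketch of "atomic" contract building blocks (the codes and
# provisions are invented for illustration, not a proposed standard).
from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    code: str       # short reference code, akin to Incoterms' "FOB"
    provision: str  # human-readable summary of the risk allocation

# A small "library" of predefined, pre-negotiated terms.
TERM_LIBRARY = {
    "IND-CAP-1X": Term("IND-CAP-1X", "Indemnity capped at 1x fees paid"),
    "LIC-NONEX": Term("LIC-NONEX", "Non-exclusive, non-transferable license"),
    "PAY-NET30": Term("PAY-NET30", "Payment due net 30 days"),
}

def assemble_agreement(codes: list[str]) -> list[Term]:
    """Compose an agreement from standard term codes instead of bespoke drafting."""
    return [TERM_LIBRARY[code] for code in codes]

for term in assemble_agreement(["LIC-NONEX", "IND-CAP-1X", "PAY-NET30"]):
    print(f"{term.code}: {term.provision}")
```

Parties would then negotiate which codes to adopt, not the wording of each term.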

The other premise is that the use of empirical data on risk probabilities would improve the outcome of negotiations, or at least shorten the deal cycle. The underlying supposition is that if both parties truly understood the likelihood of certain outcomes, they would approach negotiations more efficiently and focus on the terms that matter. Think actuarial tables for transactions.
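
As a toy illustration of the actuarial idea, the sketch below ranks terms by expected exposure (probability times impact) so negotiating effort goes where the risk actually is. Every number is made up, standing in for the empirical data the premise imagines:

```python
# Toy illustration of risk-weighted negotiation priorities; all probabilities
# and costs are invented placeholders for real empirical data.

risks = {
    # term: (probability the contingency occurs, cost in $ if it does)
    "indemnification": (0.02, 500_000),
    "change of control": (0.05, 100_000),
    "payment terms": (0.30, 10_000),
}

# Expected exposure = probability x impact; negotiate the big numbers hardest.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for term, (p, cost) in ranked:
    print(f"{term}: expected exposure ${p * cost:,.0f}")
# indemnification: expected exposure $10,000
# change of control: expected exposure $5,000
# payment terms: expected exposure $3,000
```

On these invented numbers, the heavily negotiated indemnity term really is the biggest exposure, but real data could just as easily show the opposite, which is the point of gathering it.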

Although the sample size of this survey was predictably small – no prizes and no sweepstakes because the legal issues are so complex – there were still nearly 50 respondents, consisting largely of legal professionals involved in technology transactions. There is also an element of bias, because the respondents are for the most part attorneys whom I may know professionally.

From here, it seems the next step is to split these into two separate projects, identify organizations and individuals who might be interested in pursuing them, and map out next steps. It is still not clear whether such initiatives are feasible or whether there is enough incentive to make either happen. For sure, if even a few influential companies adopted such a legal reference framework, it would make a big difference. A number of practitioners and academics have also expressed interest in one or both aspects of the survey topics. Some have already created frameworks for financial transactions, and related efforts seem to be sprouting up with increasing frequency.

We’ve already received some great feedback, including ideas from Gillian Hadfield of the USC School of Law and folks at the Berkman Center’s Law Lab such as Oliver Goodenough. I’ll summarize some of those recommendations in subsequent posts. For now, if you have further ideas or interest, or perhaps you’ve already done this and have experience you’d like to share, please feel free to post a reply or contact me directly.

Creative Commons Research & Findings on Non-commercial

Creative Commons released the results of an interesting survey to “explore understandings of the terms ‘commercial use’ and ‘noncommercial use’ among Internet users when used in the context of content found online.” Read and enjoy here.

This is good work that can create a better understanding of what “non-commercial” means. I know many folks, including myself, have struggled from time to time with what exactly it means, as in “is this use commercial?” The survey seems to confirm that the definition is a function of the circumstances and the numerous factors around the use, but people seem to share at least a general understanding. I look forward to further analysis of the findings.

Better, Faster, Cheaper Negotiations? Take the Survey and Let Us Know

Click here to take the survey

One topic that many legal practitioners have talked about lately (and for years, actually) is legal friction. Legal friction can describe many kinds of obstacles, from regulatory hurdles to course-of-dealing issues, but most often it describes the impediments (delay and fees) encountered when negotiating transactions, whether for software, services, or any form of property. More specifically, legal friction happens when the commercial terms are generally agreed but prolonged negotiations continue while each party attempts to work out the legal terms. No doubt this is a pain point for both business owners and counsel.

The legal terms effectively allocate risk between the two parties based on some perception of likely contingencies and risk profiles. Sometimes they’re really important and form key parts of the deal, but most often not. Unfortunately, the problem is exacerbated by the fact that everyone drafts their own terms in the manner they think best and often most favorable to their own interests. Since each term is handcrafted to perfection, the other party has to examine each term to determine whether it comports with their own requirements. This adds unnecessary time and expense, and it delays the start of the actual commercial arrangement, which is the whole point.

In the FOSS space, the open source licenses themselves reduce legal friction to the extent the rights and obligations of the parties are known, immutable, and well understood; thus, there’s no negotiation over the terms. Creative Commons has also done this really well, so the focus is on the exchange of the creative work and the actual agreement doesn’t get in the way. Recently, TheFunded published some standard venture capital terms, as reported by VentureBeat, that serve the same purpose. In each of these cases, the standardized agreements represent a clustered set of terms, embodying simplicity and market norms, that work for some set of transactions.

Given the above, it seems the same concepts could be extended to other kinds of software and technology transactions with just a little modification.

  • Suppose there were a widely adopted set of reference terms (atomic elements rather than whole licenses) available for transactions. In this setting, parties could incorporate the standard terms to reduce negotiation friction and uncertainty.
  • Ideally the terms would represent a range of values, including the compromise positions that are fair to both parties.  Such terms could even be used in online terms of service agreements.
  • In the maritime world, when shipping goods was the thing and property was “real”, Incoterms were heavily used to allocate risk, e.g., FOB. So in this context, imagine a set of terms that worked for IP- and service-based transactions and that could be incorporated into agreements to varying degrees.

Obviously there are a few small details, like developing the reference terms and getting adoption, but there seems to be a fair amount of pain in this area, so I suspect there are folks who would want to work on a solution. If this has already been done or tried, please advise; if not, I would welcome feedback via the survey below, or post a comment if that works better for you. The goal is to determine whether any of these assumptions are correct and to test the viability of potential solutions.

Click here to take the survey