Wednesday, July 28, 2010

A Blog Entry for CS889 Project & Howto

For the project in the CS889 class, a feedback form application was created. The widget lives in "feedbackform.h", which contains the following methods:
GtkWidget* avator_frame_new()
GtkWidget* oneFeedback(char* vote, char* feed)
char *truncate(size_t start, size_t stop, const char *input, char *output, size_t size)
GtkWidget* feedback_entry_new()
GtkWidget* ffgenerater(GtkWidget *button, gpointer data)
In addition, there are "avatar.jpg" and "db.txt", which are needed to demonstrate the feedback form in action. The following is the content of db.txt:
[A]:<8>:{A would be great, if it can do this.}
[B]:<8>:{B would be great, if it can do this.}
[B]:<7>:{B would be great, if it can do that.}
[A]:<7>:{A would be great, if it can do that.}
[A]:<4>:{Instead of doing that, you can do this.}

Each line in db.txt represents one piece of feedback from one user, in the following format:
[function-in-Sample]:<vote-count>:{feedback-from-user-about-function}

To demonstrate how feedbackform.h works, there is a sample.c file, which represents a generic application that needs user feedback on its functionality. To call the methods inside feedbackform.h, sample.c must include feedbackform.h as a library. After that, sample.c can call the ffgenerater method within its body. Whenever ffgenerater is called, it generates the complete feedback form. The following is the body of the ffgenerater method:

GtkWidget* ffgenerater(GtkWidget *button, gpointer data)
{
    GtkWidget *window;
    GtkWidget *vbox0, *vbox;
    GtkWidget *frame;

    g_print("The current function for feedback: %s\n", (char *) data);

    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "Feedback");
    gtk_window_set_default_size(GTK_WINDOW(window), 500, 500);

    vbox0 = gtk_vbox_new(FALSE, 0);
    vbox = gtk_vbox_new(FALSE, 0);
    frame = gtk_frame_new("Feedback from others...");

    gtk_container_add(GTK_CONTAINER(window), vbox0);
    gtk_box_pack_start(GTK_BOX(vbox0), avator_frame_new(), TRUE, TRUE, 10);
    gtk_box_pack_start(GTK_BOX(vbox0), frame, TRUE, TRUE, 5);
    gtk_box_pack_start(GTK_BOX(vbox0), feedback_entry_new(), TRUE, TRUE, 10);
    gtk_container_add(GTK_CONTAINER(frame), vbox);

    FILE *file = fopen("db.txt", "r");
    char tmp[256] = {0x0};
    while (file != NULL && fgets(tmp, sizeof(tmp), file) != NULL)
    {
        /* Only show feedback lines for the function named in `data`. */
        if (strstr(tmp, (char *) data))
        {
            /* vote must be large enough for the label text that
             * oneFeedback appends after the vote count. */
            char vote[48], feedback[128];
            gtk_box_pack_start(GTK_BOX(vbox),
                oneFeedback(truncate(strrchr(tmp,'<')-tmp+1, strrchr(tmp,'>')-tmp,
                                     tmp, vote, sizeof vote),
                            truncate(strrchr(tmp,'{')-tmp+1, strrchr(tmp,'}')-tmp,
                                     tmp, feedback, sizeof feedback)),
                TRUE, TRUE, 5);
        }
    }
    if (file != NULL) fclose(file);

    gtk_widget_show_all(window);
    gtk_main();
    return window;
}

The feedback form interface consists of three portions: Introduction, Feedback from Others, and New Feedback Entry. The Introduction portion is generated by the avator_frame_new method. The following is the body of the avator_frame_new method:

GtkWidget* avator_frame_new()
{
    GtkWidget *frame, *hbox, *image, *message;

    image = gtk_image_new_from_file("avatar.jpg");
    message = gtk_label_new("Hi! I'm hoping that with your help\n we can improve this program.");

    frame = gtk_frame_new("Introduction");
    hbox = gtk_hbox_new(FALSE, 0);
    gtk_container_add(GTK_CONTAINER(frame), hbox);
    gtk_box_pack_start(GTK_BOX(hbox), image, TRUE, TRUE, 0);
    gtk_box_pack_start(GTK_BOX(hbox), message, TRUE, TRUE, 0);
    return frame;
}

The avator_frame_new method loads "avatar.jpg" and shows a short message from the developer. This method is called by the ffgenerater method.

The Feedback from Others portion is generated by the main body of the ffgenerater method, with support from the truncate and oneFeedback methods. db.txt is read to retrieve feedback from other users, which is displayed in the Feedback from Others portion.

The following is the body of the oneFeedback method:
GtkWidget* oneFeedback(char* vote, char* feed)
{
    GtkWidget *frame, *hbox;
    GtkWidget *gtkFeed;
    GtkWidget *vote_button;

    /* The caller must pass `vote` in a buffer large enough to hold
     * the label text appended here (see ffgenerater). */
    strcat(vote, " people like it. \nCLICK TO VOTE!!");
    vote_button = gtk_button_new_with_label(vote);

    frame = gtk_frame_new("");
    hbox = gtk_hbox_new(FALSE, 0);
    gtkFeed = gtk_label_new(feed);

    gtk_container_add(GTK_CONTAINER(frame), hbox);
    gtk_box_pack_start(GTK_BOX(hbox), vote_button, TRUE, TRUE, 0);
    gtk_box_pack_start(GTK_BOX(hbox), gtkFeed, TRUE, TRUE, 0);
    return frame;
}

And the following is the body of the truncate method:
char *truncate(size_t start, size_t stop, const char *input, char *output, size_t size)
{
    /* Copy input[start..stop) into output, clamped so the result
     * (plus its NUL terminator) always fits in `size` bytes. */
    int count = stop - start;
    if (count >= (int)--size)
        count = size;
    sprintf(output, "%.*s", count, input + start);
    return output;
}

The New Feedback Entry portion is generated by the feedback_entry_new method, which displays a list of feedback questions and provides text fields for feedback input. This method is called by the ffgenerater method, and the following is the body of the feedback_entry_new method:
GtkWidget* feedback_entry_new()
{
    GtkWidget *frame;
    GtkWidget *vbox;
    GtkWidget *label1, *label2, *label3, *label4, *entry1, *entry2, *entry3, *entry4;
    GtkWidget *submit_button;

    label1 = gtk_label_new("I would appreciate you including your email address\n so I can contact you for further assistance. (optional)");
    label2 = gtk_label_new("Please describe how this function helped or hindered\n the completion of your task.");
    label3 = gtk_label_new("Did this function do what you expected?");
    label4 = gtk_label_new("Can you tell us what enhanced your experience or feel\n needs to be improved.");

    /* Create the entry widgets for the 4 feedback questions */
    entry1 = gtk_entry_new();
    entry2 = gtk_entry_new();
    entry3 = gtk_entry_new();
    entry4 = gtk_entry_new();

    /* Create the submit button */
    submit_button = gtk_button_new_with_label("Submit feedback!");
    g_signal_connect(submit_button, "clicked", G_CALLBACK(gtk_main_quit), NULL);

    frame = gtk_frame_new("New feedback entry");

    vbox = gtk_vbox_new(FALSE, 0);
    gtk_container_add(GTK_CONTAINER(frame), vbox);

    gtk_box_pack_start(GTK_BOX(vbox), label1, TRUE, TRUE, 0);
    gtk_box_pack_start(GTK_BOX(vbox), entry1, TRUE, TRUE, 0);
    gtk_box_pack_start(GTK_BOX(vbox), label2, TRUE, TRUE, 0);
    gtk_box_pack_start(GTK_BOX(vbox), entry2, TRUE, TRUE, 0);
    gtk_box_pack_start(GTK_BOX(vbox), label3, TRUE, TRUE, 0);
    gtk_box_pack_start(GTK_BOX(vbox), entry3, TRUE, TRUE, 0);
    gtk_box_pack_start(GTK_BOX(vbox), label4, TRUE, TRUE, 0);
    gtk_box_pack_start(GTK_BOX(vbox), entry4, TRUE, TRUE, 0);
    gtk_box_pack_start(GTK_BOX(vbox), submit_button, TRUE, TRUE, 0);
    return frame;
}

This feedback form application (feedbackform.h) is written in C with the GTK library. Since GTK is licensed under the GNU Lesser General Public License and this application is built on GTK, the application is released as FOSS. Any developer is welcome to modify its source code or incorporate it into her own program.

Monday, June 28, 2010

Summary for readings on June 28th

[1] presents results from an empirical study of OSS developers' opinions about usability and the way usability engineering is practiced in a variety of contemporary OSS projects:
“to understand current practices and obstacles to change”
“focus on projects carried out by small groups of volunteer”

The study in [1] consists of three elements: "(1) an online questionnaire survey answered by contributors to a variety of OSS projects, (2) interviews with three OSS developers and (3) interviews with five usability evaluators for OSS projects."
The questionnaire consisted of three parts: ‘About your current project’, ‘Communication’ and ‘Usability’. The quantitative data collected was organized in sections reflecting the three focus areas of the questionnaire.
The developer interviews were performed with three of the respondents (two project managers and one usability tester) from the questionnaire survey. These interviews investigated the following themes:
• Respondent’s motivation for contributing to OSS
• Usability considerations used in the project
• Frequency and place of usability evaluations
• Usability as a part of the development process
• Usability experts in the development team
• Willingness to alter program code because of usability problems discovered in tests
• Decision making in the project, especially regarding usability

To analyse the data collected in the interviews:
"First we identified a number of topics or tendencies we found important in the transcriptions. Following this, we analyzed the statements in more detail to extract the overall opinion of the respondent."

Finally, the evaluator interviews involved five employees at Relevantive who had experience in OSS development and usability engineering. These interviews investigated the following themes:
• Test procedures of OSS
• Usability evaluation
• Communication with OSS developers
• K Desktop Environment (KDE) guidelines
• Remote usability evaluation
• OSS and usability in general

Following this procedure, they conducted personal interviews with two of the employees:
“For these interviews we did not use a pre-constructed interview guide, but instead we used the data collected from the focus group interview to find themes to explore further. Third, we had the opportunity to observe them while they conducted a usability test of an OSS product.”

The data collected in the evaluator interviews was analysed the same way as data in the developer interviews.
By analysing the collected data, [1] shows the reasons why people are motivated to contribute to OSS:
“In the questionnaire answers, 88% of the developers chose ‘To strengthen free software’ as their motivation,…, and the interviews with developers and usability professionals supported this. In addition, 54% of the questionnaire respondents choose ‘Community reputation’ as a motivation….75% of the developers contribute to OSS in order to improve their skills and 88% wanted to be intellectually stimulated.”

[1] states that “83% of the questionnaire respondents regarded the importance of usability as either ‘high’, ‘very high’, or ‘extremely high’. Only 13% considered it ‘moderate’, 4% stated ‘slight’ and nobody thought it had no importance,…”
The study also shows that OSS developers have differing definitions of “usability”. In addition, OSS developers were reluctant to work with usability experts even though they wanted a higher degree of usability in their software.

The OSS development process is characterized by short iterative cycles consisting of four main stages: “In the beginning”, “Iteratively”, “In the end”, and “During testing”. The OSS developers had different ideas on where usability belongs in the development process.

This paper also suggests that a trust relationship between OSS developers and usability experts is required before the two can cooperate:

“For instance Relevantive experienced that almost all problems faced when working with OSS developers were grounded in lack of trust, which made developers ignore suggestions from usability professionals.”

OSS projects in general had flat organizational structures, unlike the traditional development process, which is led by a formal leadership:

“…even though almost every OSS project had at least one project manager associated, this title did not imply leadership over the project. Often this title reflected the person who founded the project rather than the person who kept track of everything or delegated tasks to other contributors.”

[1] states 3 common mantras about OSS:
- OSS development is always democratic
- OSS will solve the ‘software crisis’
- Usability problems are just bugs

[2] is intended to answer the following questions:
• How do open source developers define and conceptualize the notion of usability?
• What motivations do FOSS developers have for creating software that is usable by people other than themselves?
• What are current usability practices in the FOSS community?
• How do FOSS usability practices differ from traditional usability practices?

The interview process in [2] consists of the following segments:
“1. Obtaining basic background information on the participant, such as their day job and what FOSS project they are associated with
2. Learning how and when they got involved with their FOSS project. If they were a project member, we also asked what they do in the project and why they stay involved
3. Their perception of the concept of “usability”
4. How they practice or perceive others practicing usability in the project”
One interesting thing to note is that the interview participants could define “usability” any way they wanted during the interview:
“As can be seen, the definitions of usability span the range of definitions commonly found in the HCI textbooks, and demonstrate that the community, as a whole, possesses a fairly sophisticated, well-rounded notion of the concept.”

There are a number of ways to improve usability, such as interaction between developers and end-users. Tools such as IRC and mailing lists serve as communication channels between the two. However, users often do not explain a problem thoroughly, which makes it hard for developers to help.
Developers do not necessarily use the applications they build themselves. Feedback from the user community and application testing are important for building more stable versions of an application.
Usability experts need to educate FOSS project members “about how to think about and practice usability/UX on a day-to-day basis” when they join FOSS projects.
Social relationships between usability experts and developers serve as the primary motivators for addressing usability issues on a day-to-day basis.
The size of the user base, together with praise and positive feedback from users, is what motivates OSS developers to look into usability issues.
[2] concludes that the usability and HCI of OSS applications can be improved from the following two perspectives:
- Improving Practices from within the FOSS Community
- Reconceptualizing HCI for FOSS Development

REFERENCE
[1] http://itc.ktu.lt/itc353/stage353.pdf
[2] http://doi.acm.org/10.1145/1753326.1753476
[3] http://doi.acm.org/10.1145/1753326.1753576

Wednesday, June 23, 2010

Summary for readings on June 23rd

[1] is titled "Exploring Usability Discussions in Open Source Development" and examines usability issues within the bug-report mechanisms of current OSS projects. Research on OSS development has shown the importance of the data available in public software repositories and bug databases, but has paid little attention to how that data is managed. [1] performed a quantitative analysis of the Greenstone mailing lists and the Bugzilla instances at Mozilla and GNOME. The study looks for answers to the following questions:
What is the nature of usability discussions in OSS projects?
Is it different from what might be expected from the textbooks on how to do interface design?
Is it different from commercial software design?
What are the patterns of discourse and process that emerge within and across projects?

The study in [1] explores the bug databases for terms such as ‘usability’, ‘human computer interaction’, ‘interface’, etc. The authors then investigated those "bug reports that contained those words and were determined to be indeed about usability".
The bug-report mechanisms studied in [1], such as Bugzilla, show a number of drawbacks:
"sometimes even a screenshot is not sufficient to uniquely identify the problem."
"...bug reports can reveal information about the reporter and the reporter here has clearly gone to significant trouble to obscure the text whilst still reporting the bug."

Rewording text elements of interfaces occurs relatively frequently:
"-Many usability problems can be addressed quickly and cheaply by rewording."
"-In our experience of teaching usability, it is a context where relative usability novices can play a useful role, serving an apprenticeship before moving to more complex problems."
"-Talking about the wording of interface elements is much easier to do in a mostly text based interaction medium such as Bugzilla than talking about graphical elements or interaction processes."

While fixing bugs according to bug reports, the developer must avoid the “ripple effect”. The ripple effect of bug fixing is demonstrated with the example of a “dialog box” problem:
“In fixing this bug, it creates or accentuates other bugs; dialog boxes whose information no longer fits within the pane. Resizable dialog boxes had been used as a workaround for this problem, although one that various commentators to bug A saw as rather clumsy. The consequence was that fixing one bug created the need to fix other bugs.”
“Subjective usability bugs may need a more provisional approval process, while more evidence is collected of the relative incidence and severity of the bug….a duplicate identification tool would be a valuable addition to OSS projects.”

Bug reporting and classification are important to the OSS development:
Reporting tools that automatically provide contextual metadata further reduce the effort required by bug reporters. Subjective usability bugs may need a more provisional approval process, while more evidence is collected of the relative incidence and severity of the bug. Use of suitable keywords could distinguish provisional subjective bugs from more objective established bugs. That would enable investigation to continue without adding undue complexity to the system, and avoid premature discarding of a partial bug report.

Our analysis supports this suggestion and we note that such a tool's effectiveness is partially based on bug metadata. Tools such as GNOME Bug Buddy and the Bugzilla Helper promote structured textual reports but the clarification dialogues shown in Figures 1 and 3, and in numerous bug discussions, show that metadata is more valuable. Bug metadata more directly supports querying and partitioning of the bug reports, which should help to reduce duplicates and parallel bug discussions.

However, classifying bugs (both usability bugs and functionality bugs) is a complex process:
One approach to classifying usability bugs may be to use the structure of the user interface itself as a hierarchical classification system. That is, the menus, sub-menus and dialog boxes of the interface become nodes in the classification hierarchy of the bug repository, so that a preferences bug can be located directly from the system's interface.

Managing bug reports to improve usability is a difficult task. [1] searched for usability problems using specific keywords in the database, but people, especially non-technical people, can use different wording to describe the same problem without ever mentioning ‘usability’, ‘human computer interaction’, or ‘interface’.

[2] is titled "Silver bullet or fool's gold: supporting usability in open source software development". [2] is an abstract by Twidale, one of the authors of [1]. The abstract is about Twidale’s talk on the problem of creating usable interfaces for non-technical end-users. In this talk, Twidale suggests coordination between end-users and developers in order to improve OSS usability.

REFERENCE
[1] http://dx.doi.org/10.1109/HICSS.2005.266
[2] http://doi.acm.org/10.1145/1062455.1062468

Monday, June 21, 2010

Summary for readings on June 21st

[1] uses Mozilla as the subject of a usability study, and analyzes user contributions to the Mozilla bug repository in three major steps:
1) Separating contributors into four categories: core developers, active developers, reporters, and users.
2) Analyzing the outcomes of reports written by the different contributor groups.
3) Analyzing user and reporter comments in both routine and contentious reports, and developers' responses.

[1] studies three of these groups: CORE developers, ACTIVE developers, and REPORTERs. Interesting trends appear over the course of Mozilla's development:
"Figure 1 shows the number of contributors commenting every six months since Netscape released their source code in March 1998. Several things are evident from this graph. First, the USER and REPORTER groups are the only groups that fluctuate substantially in their contributions over time; the CORE and ACTIVE developers, in contrast, wrote a comparable number of comments each six months..."

There are 8 types of report resolutions:
"fixed reports lead to a change in the software (a patch); incomplete reports are missing data needed to fix an issue; invalid reports identify a problem, but not one that Mozilla was responsible for fixing; worksforme reports did not involve a problem; wontfix reports identify issues that the community decided not to address; and duplicate reports regard issues that have already been reported. We omit the expired and moved resolutions from our analyses, since they were used infrequently."

And,
About 62% of ACTIVE reports and 60% CORE reports are marked fixed, and these account for 79% of all fixed bugs. In contrast, 13% of REPORTER reports are marked fixed, accounting for 21% of fixed reports....Fixed REPORTER reports were also open significantly longer than ACTIVE and CORE reports (RS χ2(df=3,n=148,902) =15,854,p<.0001). The median fixed REPORTER report was open for 371 days whereas the median fixed ACTIVE report was open for 123 and CORE was 119.

[1] also shows that reporter contributions have become less frequent and less useful:
the number of REPORTER reports has been dropping since the 0.1 release of Firefox, and the number of fixed reports has dropped with it. In Figure 4, we see that the proportion of REPORTERs’ report resolutions have stabilized, except for an increase in invalid and incomplete reports after the release of Firefox 1.0. In Figure 5, we see that the proportion of fixed reports due to REPORTERS reached its peak with the release of Firefox 1.0, and has dropped since. In fact, of REPORTERS fixed reports, 69% were fixed before the Firefox 1 release.

There are some possible causes to these trends:
..., perhaps most REPORTER effort before the release of Firefox 1.0 was from technically skilled REPORTERS, enthusiastic about the first release of the browser, but after this, less technically skilled REPORTERS dominated the reporting class, leading to more incomplete and invalid reports and fewer fixed reports. It is also possible that those REPORTER reports that would have been marked fixed began being marked duplicate instead, as ACTIVE developers became better at finding and reporting problems before REPORTERS reported them. Another interpretation is that as Mozilla software improved, there were fewer issues to report,...

[2] examines how the open source development process influences usability and suggests usability-improvement methods appropriate for community-based software development on the Internet. Usability is described by five characteristics, namely "ease of learning, efficiency of use, memorability, error frequency and severity, and subjective satisfaction", as distinct from other characteristics such as reliability and cost.
Studies have been done comparing OSS and proprietary software, but the following differences between the two might influence such comparisons:
"development time, development resources, maturity of the software, prior existence of similar software etc. Some of these factors are characteristic of the differences between open source and commercial development but the large number of differences make it difficult to determine what a ‘fair comparison’ should be."

[2] presents a set of features of the OSS development process that appear to contribute to its poor usability:

1. Developers are not users
2. Usability experts do not get involved in OSS projects
3. The incentives in OSS work better for improvement of functionality than usability
4. Usability problems are harder to specify and distribute than functionality problems
5. Design for usability really ought to take place in advance of any coding
6. Open source projects lack the resources to undertake high quality usability work
7. Commercial software establishes state of the art so that OSS can only play catch-up
8. OSS development is inclined to promote power over simplicity

[2] suggests the following approaches to improve OSS usability:
Commercial approaches
Technological approaches
Academic involvement
Involving the end users
Creating a usability discussion infrastructure
Fragmenting usability analysis and design
Involving the experts
Education and Evangelism

[3] is a blog post by Professor Terry titled “How Open Licenses Affect User Experience”. Open licenses lead to four user experiences: “Greater choice and variety in software for end-users”, “New types of workflows”, “Software as a communication medium” and “Increased collaboration”.
The 1st user experience is “greater choice and variety in software for end-users”. Open licenses lead to more software choice via “forking and customization”, “enabling libraries”, and “never dying”. Forking and customization produce variety directly. Enabling libraries give end-users more choices, since they make it easier to create different applications serving the same underlying goal. And open source applications tend to live on even if their maintainers leave development.
FOSS is at a disadvantage when users are choosing between FOSS and commercial software. First of all, since FOSS is not priced the way commercial software is, users tend to think FOSS is worthless because it is given away for free. Secondly, users expect a commercial software producer to keep improving its software in order to continue making money, while the future of open source software seems less certain. Finally, since users make no monetary investment in the software, they are less attached to it and more likely to judge it by superficial features. Thus, the FOSS developer must provide the following:
”make a compelling, intuitive, easy-to-use first-run experience so users stick around long enough to learn how to use the software.”
“…tools that help users explore and understand the many alternatives.”

The 2nd experience is “new workflows”, which consists of three modes, namely “the temporary domain expert”, “A La Carte Computing”, and “the Trivial Use of Fat Apps”:
“The temporary domain expert becomes enough of an expert in a domain to solve a given problem, then moves on to solve other problems.”
“In this mode (A La Carte Computing) of working, what is important is the task, not the tools. In this view of computing, people want to get a job done, they want to get it done fast, and they don’t care what tools are required to complete the task.”
“In this case (The Trivial Use of Fat Apps), users under-utilize a powerful, sophisticated application to accomplish a very small task.”

The 3rd experience is “software as a communication medium”, where FOSS is quickly becoming a delivery medium for interactive information, education, and ideas:
There are several design implications for this use of FOSS. First, there is a need to be able to quickly install applications so users can focus on the concepts being communicated. Second, there is a need for sandboxes to run untrusted third-party code. Sandboxes have received limited attention in the FOSS community, but will be of increasing importance to support this practice and others, such as collaborative work.

The 4th experience is “Increased collaboration”:
Independent of web apps and applications with built-in support for collaboration, software released under free/open source licenses will lead to greater collaboration between individuals. This increased collaboration is made possible by the ability to freely redistribute a common toolset for performing work.


REFERENCE
[1] http://portal.acm.org/citation.cfm?doid=1753326.1753576
[2] http://www.cs.waikato.ac.nz/~daven/docs/oss-wp.pdf
[3] http://hackingusability.wordpress.com/2010/02/26/how-open-licenses-affect-user-experience/

Wednesday, June 16, 2010

Summary for readings on June 16th

The topic of the June 16th reading set is “Usability and Open Source”. [1] is an initial proposal bringing HCI to the OSS community. HCI offers methods to agree on processes, and an appreciation of the principles, standards, and guidelines within which to develop interfaces that humans will have to use.
Human Computer Interaction (including usability engineering and interaction design) offers open source software development at least three important opportunities:
to create systems that are usable by ordinary people
to assure that OSS behaves consistently no matter who contributes to it, and
to respect the needs of disabled users as we standardize visual and interaction interface elements to meet accessibility requirements
[2] claims that OSS projects have poor user interfaces. [2] describes the challenges within a number of OSS projects, namely NetBeans, GNOME, and OpenOffice.org. These projects share similar challenges, such as communication problems among geographically distributed developers. The goal for HCI professionals is the following:
[…] defining and integrating a suitable usability methodology into open source processes should be the first priority. It is also vital for each project to agree on its target audience, and to specify a clear and preferably centralized decision-making process.

[3] states 9 reasons why OSS has poor user interfaces:

1. Dedicated volunteer interface designers appear to be much rarer than their paid counterparts — and where they do exist, they tend to be less experienced (like yours truly).
2. First corollary: Every contributor to the project tries to take part in the interface design, regardless of how little they know about the subject. And once you have more than one designer, you get inconsistency, both in vision and in detail. The quality of an interface design is inversely proportional to the number of designers.
3. Second corollary: Even when dedicated interface designers are present, they are not heeded as much as they would be in professional projects, precisely because they’re dedicated designers and don’t have patches to implement their suggestions.
4. Many hackers assume that whatever Microsoft or Apple do is good design, when this is frequently not the case. In imitating the designs of these companies, volunteer projects repeat their mistakes, and ensure that they can never have a better design than the proprietary alternatives.
5. Volunteers hack on stuff which they are interested in, which usually means stuff which they are going to use themselves. Because they are hackers, they are power users, so the interface design ends up too complicated for most people to use.
6. The converse also applies. Many of the little details which improve the interface — like focusing the appropriate control when a window is opened, or fine-tuning error messages so that they are both helpful and grammatical — are not exciting or satisfying to work on, so they get fixed slowly (if at all).
7. As in a professional project, in a volunteer project there will be times when the contributors disagree on a design issue. Where contributors are paid to work on something, they have an incentive to carry on even if they disagree with the design. Where volunteers are involved, however, it’s much more likely that the project maintainer will agree to add a user preference for the issue in question, in return for the continued efforts of that contributor. The number, obscurity, and triviality of such preferences ends up confusing ordinary users immensely, while everyone is penalized by the resulting bloat and reduced thoroughness of testing.
8. For the same reason — lack of monetary payment — many contributors to a volunteer project want to be rewarded with their own fifteen pixels of fame in the interface. This often manifests itself in checkboxes or menu items for features which should be invisible.
9. The practice of releasing early, releasing often frequently causes severe damage to the interface. When a feature is incomplete, buggy, or slow, people get used to the incompleteness, or introduce preferences to cope with the bugginess or slowness. Then when the feature is finished, people complain about the completeness or try to retain the preferences. Similarly, when something has an inefficient design, people get used to the inefficiency, and complain when it becomes efficient. As a result, more user preferences get added, making the interface worse.

[4] lists Free Software’s usual usability problems, with solutions for each:
1. Weak incentives for usability
Solutions: Establish more and stronger incentives. For example, annual Free Software design awards could publicize and reward developers for good design.
2. Few good designers
Solutions: Provide highly accessible training materials for programmers and volunteer designers, to improve the overall level of design competence.
3. Design suggestions often aren’t invited or welcomed.
Solution: Establish a process for usability specialists to contribute to a project.
4. Usability is hard to measure.
Solutions: Promote small-scale user testing techniques that are practical for volunteers.
5. Coding before design.
Solution: Pair up designers with those programmers wanting to develop a new project or a new feature.
6. Too many cooks.
Solution: Projects could have a lead human interface designer, who fields everyone else’s suggestions, and works with the programmers in deciding what is implementable.
7. Chasing tail-lights.
Solution: Encourage innovative design through awards and other publicity.
8. Scratching their own itch.
Solutions: Establish a culture of simplicity, by praising restrained design and ridiculing complex design.
9. Leaving little things broken.
Solution: When scheduling bug fixes, take into account how long they will take, possibly scheduling minor interface fixes earlier if they can be done quickly.
10. Placating people with options.
Solution: Strong project maintainers and a culture of simplicity.
11. Fifteen pixels of fame.
Solutions: Provide alternative publicity, such as a Weblog, for crediting contributors.
12. Design is high-bandwidth, the Net is low-bandwidth.
Solutions: Develop and promote VoIP, video chat, virtual whiteboard, sketching, and animation software that allows easier communication of design ideas over the Internet.
13. Release early, release often, get stuck.
Solution: Publish design specifications as early as possible in the development process, so testers know what to expect eventually.
14. Mediocrity through modularity.
Solution: Design an example graphical interface first, so that interface requirements for the lower levels are known before they are written.
15. Gated development communities.
Solutions: Free Software system vendors can coordinate cross-component features like this, if they have employees working on all relevant levels of the software stack.

[5] states the following questions that a GUI application developer for Linux or BSD should ask herself:
1. What does my software look like to a non-technical user who has never seen it before?
2. Is there any screen in my GUI that is a dead end, without giving guidance further into the system?
3. The requirement that end-users read documentation is a sign of UI design failure. Is my UI design a failure?
4. For technical tasks that do require documentation, do they fail to mention critical defaults?
5. Does my project welcome and respond to usability feedback from non-expert users?
6. And, most importantly of all...do I allow my users the precious luxury of ignorance?

[6] claims that OSS should be task-oriented instead of feature-oriented. A task-oriented configurator would have logic in it like this:
•If the machine doesn't have an active LAN interface, gray out all the "Networked" entries.
•If the machine has no device connected to the parallel port and no USB printers attached, gray out the "Locally connected" entry.
•If probing the hosts accessible on the LAN (say, with an appropriately-crafted Christmas-tree packet) doesn't reveal a Windows TCP/IP stack, gray out the SMB entry.
•If probing the hosts accessible on the LAN doesn't reveal a Novell Netware NCP stack, gray out the NCP entry.
•If probing the hosts accessible on the LAN doesn't reveal a Jet-Direct firmware TCP/IP stack, gray out the JetDirect entry.
•If all Unix hosts on the LAN have CUPS daemons running, gray out the LPD entry.
•If the preceding rules leave just one choice, so inform the user and go straight to the form for that queue type.
•If the preceding rules leave no choices, complain and display the entire menu.


REFERENCE
[1] http://doi.acm.org/10.1145/506443.506666
[2] http://doi.acm.org/10.1145/985921.985991
[3] http://web.archive.org/web/20030201183139/http://mpt.phrasewise.com/discuss/msgReader$173
[4] http://mpt.net.nz/archive/2008/08/01/free-software-usability
[5] http://catb.org/~esr/writings/cups-horror.html
[6] http://catb.org/~esr/writings/luxury-part-deux.html
[7] http://daringfireball.net/2004/04/spray_on_usability

Monday, June 14, 2010

Summary for readings on June 14th

[1] describes social behavior in OSS teams and the development of an OSS browser that helps observe the social network in an OSS team.
Many studies focus on static accounts of OSS community organization, which view the OSS community as a role hierarchy. Starting from the top of the hierarchy, there are core developers, maintainers, patchers, bug reporters, documenters, and then users. Core developers often have the smallest population but hold a very important role in the community:
Indeed several empirical studies have found that, in a large majority of cases, a small core group is often responsible for a large proportion of the work accomplished and a very large group of peripheral participants is responsible for the remainder.

However, while there are many studies of OSS community structures, there are still fewer studies of volunteers' motivation to participate in OSS projects. Motivation starts from the socialization of newcomers, which is an important ingredient in keeping a project going. The OSS community relies on online tools as "a central source of power and influence". However, these centralized online tools face the following problems:
First, and especially for those interested in using qualitative methods, it is extremely easy to fall prey to data overload. Indeed the number of messages exchanged in Open Source mailing lists is in the order of hundreds, frequently thousands of messages per week....Second, OSS research material can be quite opaque. Despite their centrality in the Open Source development process, tools such as CVS databases, for example, produce few immediately analyzable outputs to the untrained eye.

To analyse OSS projects, the author started the OSS Project Browser project to study and analyse the OSS movement. The author proposes two theoretical attributes:
(1) The software must make the hybrid nature of a project visible by showing
the connections not only between people, but also between people and material artifacts.
(2) The software must offer a dynamic perspective on activities and allow observations over time.
The OSS Project Browser should facilitate the ethnographic observation of an OSS project:
(3) It must offer both aggregate views (to avoid data overload and facilitate the selection of interesting episodes of activity) and at the same time preserve access to the raw, untouched research material for qualitative analysis.
(4) Since my interest here is in the socialization of newcomers, there must be ways to track a participant’s trajectory easily.

The author experiments with the OSS Project Browser on the Python project, and the following description explains how the OSS Project Browser works:
The topmost pane (1) is a graphical representation of (borrowing Latour’s (1987b) terminology) the hybrid network for a particular OSS project. Black dots are individual participants in the project. A black line connects participants if they have both responded to each other over email. The more they reciprocated, the shorter the line (as in Sack, 2001). Another important part of this representation consists of the artifacts for this project, namely software code. Code is represented as blue rectangles. When an individual contributes code to the project, he is connected to the corresponding artifact with a blue line. The more he contributed to this particular artifact, the shorter the line.

From the study of the “Fred” case, the process of reaching the status of developer in Python involves the following steps:
(1) peripheral monitoring of the development activity;
(2) reporting of bugs and simultaneous suggestions for patches;
(3) obtaining CVS access and directly fixing bugs;
(4) taking charge of a ‘‘module size’’ project;
(5) developing this project, gathering support for it, defending it publicly;
(6) obtaining the approval of the core members and getting the module integrated into the project’s architecture.

Establishing an identity is important for becoming a developer:
First, one can actively contribute to PEPs and features discussion….Second, one can submit bug reports and, simultaneously (this is important), a proposed solution to fix these bugs.

The OSS browser can help a new participant with the following steps:
(1) A period of ‘‘lurking’’ to assimilate the project’s culture and identify the areas in need of new contributions.
(2) Enrollment of key allies in support of future work.

To summarize the paper, the author lists the following limitations of Open Source research:
Open Source projects are dynamic entities, yet most of the current research has produced only static accounts of their activity.
Open Source projects are hybrid, multi-sited environments composed of a network of human and material artifacts, yet these dimensions are often considered in isolation.
The massive amounts of research data available tend to favor aggregate statistical analysis to the detriment of more qualitative, in-depth analysis of the activity in a project.
Open Source productions are often difficult to understand for non-developers; accessing and processing some of this data (e.g. CVS records) requires technical knowledge.

The author proposes two frameworks for understanding socialization in Open Source projects:
(1) as an individual learning process based on the construction of identities, and
(2) as a political process involving the recruitment and transformation of human and material allies.

In conclusion, a newcomer needs to establish an identity and earn the trust of the core developers in order to gain full access to modify the source code. I think this makes sense: it is logical that a newcomer must go through the whole process before adding a piece of code to the source. In this way, the new code is well vetted, which reduces the rate of errors in the source code.

REFERENCE
[1] Ducheneaut, N. 2005. Socialization in an Open Source Software Community: A Socio-Technical Analysis. Comput. Supported Coop. Work 14, 4 (Aug. 2005), 323-368. http://dx.doi.org/10.1007/s10606-005-9000-1

Thursday, June 10, 2010

Summary for readings on June 9th

[1] and [2] study the following areas:
1. why individuals participate
2. resources and capabilities supporting development activities
3. how cooperation, coordination, and control are realized in projects
4. alliance formation and inter-project social networking
5. FOSS as a multi-project software ecosystem
6. FOSS as a social movement

Free software generally appears licensed with the GNU General Public License (GPL), while OSS may use either the GPL or some other license that allows for the integration of software that may not be free software. Free software can be seen as a social movement, whereas OSS is just a software development methodology,...

Participants join FOSSD projects for various reasons:
Sometimes they may simply see their effort as something that is fun, personally rewarding, or provides a venue where they can exercise and improve their technical skill or competence in a manner that may not be possible within their current job or line of work....building trust and reputation, achieving "geek fame", being creative, advancing through evermore challenging technical roles,...

The following are the resources needed to help make FOSS efforts more likely to succeed:
Personal software development resources: Volunteers participate in the project via the Internet and can bring their own development methodologies to it.
Beliefs supporting FOSSD: These beliefs are freedom of expression and freedom of choice.
in FOSS projects, these additional freedoms are expressed in choices for what to develop or work on , how to develop it, and what tools to employ. They also are expressed in choices for when to release work products, determining what to review and when, and expressing what can be said to whom with or without reservation.

FOSSD informalisms: the information participants use to describe, proscribe, or prescribe what is happening in a FOSSD project.
(i) communications and messages within project Email lists, (ii) threaded message discussion forums,
bulletin boards, or group blogs, (iii) news postings, (iv) project digests, and (v) instant messaging or Internet relay chat. They also include (vi) scenarios of usage as linked Web pages, (vii) how-to guides, (viii) to-do lists, (ix) FAQs, and other itemized lists, and (x) project Wikis, as well as (xi) traditional system documentation and (xii) external publications. FOSS (xiii) project property licenses are documents that also help to define what software or related project content are protected resources that can subsequently be shared, examined, modified, and redistributed. Finally, (xiv) open software architecture diagrams, (xv) intra-application functionality realized via scripting languages like Perl and PHP, and the ability to either (xvi) incorporate plug-in externally developer software modules, or (xvii) integrate software components, modules, or scripts from other OSSD efforts

Competently skilled, self-organizing and self-managed FOSS developers: Volunteers require a base of prior experience in constructing open systems and experience with project management tools.
Discretionary time and effort of FOSS developers: Participants contribute their time and effort to a project for various reasons:
self-determination, peer recognition, project affiliation or identification, and self-promotion, as well as belief in the inherent value of free software.

Trust and social accountability mechanisms: Developing FOSS source code and applications requires trust and accountability among project participants.
A software version control tool is required to resolve conflicts in the course of FOSS development, and such a tool has the following elements:
(a) a centralized mechanism for coordinating and synchronizing FOSS development...
(b) an online venue for mediating control over what software enhancements, extensions, or architectural revisions will be checked-in and made available for check-out throughout the decentralized project as part of the publicly released version

Many FOSSD projects are interdependent through the networking of software developers, development artifacts, common tools, shared Web sites, and computer-mediated communications.

REFERENCE

[1] Scacchi, W. 2007. Free/open source software development. In Proceedings of the the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (Dubrovnik, Croatia, September 03 - 07, 2007). ESEC-FSE '07. ACM, New York, NY, 459-468. http://doi.acm.org/10.1145/1287624.1287689

[2] Scacchi, W. 2007. Free/open source software development: recent research results and emerging opportunities. In the 6th Joint Meeting on European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering: Companion Papers (Dubrovnik, Croatia, September 03 - 07, 2007). ESEC-FSE companion '07. ACM, New York, NY, 459-468. http://doi.acm.org/10.1145/1295014.1295019

[3] Ye, Y. and Kishida, K. 2003. Toward an understanding of the motivation Open Source Software developers. In Proceedings of the 25th international Conference on Software Engineering (Portland, Oregon, May 03 - 10, 2003). International Conference on Software Engineering. IEEE Computer Society, Washington, DC, 419-429.

Wednesday, June 2, 2010

Summary for readings on June 2nd

The reading theme of June 2nd is the processes of the Apache project and the Mozilla project.
[2] is about the Apache project. Since the Apache Group members work in geographically decentralized workspaces, all information on the Apache Group is recorded in three archival sources of data, namely the developer email list (EMAIL), the Concurrent Version Control archive (CVS), and the Problem Reporting Database (BUGDB).
The Apache project team solved its process issues first, before development began, because:
"..it was clear from the very beginning that a geographically distributed set of volunteers, without any traditional organizational ties, would require a unique development process in order to make decisions."

The Apache Group sets requirements for people who are interested in Apache development:
"Each Apache Group (AG) member is a volunteer who has contributed for an extended period of time..., and members are nominated for membership and then voted on by existing members."

An AG member has the privilege to "vote on the inclusion of any code change, and has write access to CVS". A "core developer" is active in 4-6 developments in any given week. Each Apache developer iterates through the following actions over the course of Apache development:
"...discovering that a problem exists, determining whether a volunteer will work on it, identifying a solution, developing and testing the code within their local copy of the source, presenting the code changes to the AG for review, and committing the code and documentation to the repository"

When an AG member has discovered a problem in the development, the member can report it in EMAIL, BUGDB, or the USENET Apache newsgroups. Then a developer who has experience with that specific type of problem will volunteer to work on it. Note that "developers tend to work on problems that are identified with areas of the code they are most familiar." Apache is divided into parts, and developers are assigned "code ownership" of the parts of the server that they are known to have created or to have maintained consistently. Developers can forward their possible solutions to the mailing list for the group to review while developing the solution. Once a solution is identified, the developer tests it and makes the changes to the Apache source code. Note that “…all of the core developers are responsible for reviewing the Apache-CVS mailing list to ensure that the changes are appropriate.”
[3] also describes Apache release management as follows:
“When the project nears a product release, one of the core developers volunteers to be the release manager, responsible for identifying the critical problems that prevent the release, determining when those problems have been repaired and the software has reached a stable point, and controlling access to the repository so that developers don’t inadvertently change things that should not be changed just prior to the release.”

[3] mentions that the Mozilla project maintains a roadmap document that specifies what will be included in future releases, as well as the dates for which releases are scheduled. Mozilla uses the Bugzilla problem-reporting mechanism for its bug reporting and enhancement request process. Bugzilla lets bug reporters see the most recent bugs, to avoid duplicate reports. The development community can browse Bugzilla to identify bugs or enhancements they would like to work on. Fixes are often submitted as attachments to Bugzilla problem reports.

Both [2] and [3] try to find quantitative answers to the following questions to understand how Apache/Mozilla came to exist:
Q1: What was the process used to develop Apache/Mozilla?
Q2: How many people wrote code for new Apache functionality? How many people reported problems? How many people repaired defects?
Q3: Were these functions carried out by distinct groups of people? Did large numbers of people participate somewhat equally in these activities, or did a small number of people do most of the work?
Q4: Where did the code contributor work in the code? Was strict code ownership enforced on a file or module level?
Q5: What is the defect density of Apache code?
Q6: How long did it take to resolve problems? Were higher priority problems resolved faster than low priority problems? Has resolution interval decreased over time?

With the quantitative results, the Apache project and the Mozilla project share a number of hypotheses:
“Hypothesis 1: Open source development will have a core of developers who control the code base. This core will be no larger than 10-15 people, and will create approximately 80% or more of the new functionality.”
Hypothesis 2: For projects that are so large that 10-15 developers cannot write 80% of the code in a reasonable time frame, a strict code ownership policy will have to be adopted to separate the work of additional groups, creating, in effect, several related OSS projects.
Hypothesis 3: In successful open source developments, a group larger by an order of magnitude than the core will repair defects, and a yet larger group (by another order of magnitude) will report problems.
Hypothesis 4: Open source developments that have a strong core of developers but never achieve large numbers of contributors beyond that core will be able to create new functionality but will fail because of a lack of resources devoted to finding and repairing defects in the released code.
Hypothesis 5: Defect density in open source releases will generally be lower than commercial code that has only been feature-tested, i.e., received a comparable level of testing.
Hypothesis 6: In successful open source developments, the developers will also be users of the software.
Hypothesis 7: OSS developments exhibit very rapid responses to customer problems.

The above hypotheses apply to both the Apache project and the Mozilla project, except that [3] specifies alternative versions of hypothesis 1 and hypothesis 2 for Mozilla:
Hypothesis 1a: Open source developments will have a core of developers who control the code base, and will create approximately 80% or more of the new functionality. If this core group uses only informal, ad hoc means of coordinating their work, it will be no larger than 10-15 people.
Hypothesis 2a: If a project is so large that more than 10-15 people are required to complete 80% of the code in the desired time frame, then other mechanisms, rather than just informal, ad hoc arrangements, will be required in order to coordinate the work. These mechanisms may include one or more of the following: explicit development processes, individual or group code ownership, and required inspections.

Finally, it is feasible to perform a “hybridized process of commercial and OSS practices”. Personally, I think the OSS process can be applied to commercial software, but the restrictions and closed nature of commercial software teams might reduce the effectiveness of OSS development.

REFERENCE
[1] The Cathedral and the Bazaar (Raymond)

[2] Mockus, A., Fielding, R. T., and Herbsleb, J. 2000. A case study of open source software development: the Apache server. In Proceedings of the 22nd international Conference on Software Engineering (Limerick, Ireland, June 04 - 11, 2000). ICSE '00. ACM, New York, NY, 263-272. http://doi.acm.org/10.1145/337180.337209

[3] Mockus, A., Fielding, R. T., and Herbsleb, J. D. 2002. Two case studies of open source software development: Apache and Mozilla. ACM Trans. Softw. Eng. Methodol. 11, 3 (Jul. 2002), 309-346. http://doi.acm.org/10.1145/567793.567795

Monday, May 31, 2010

Summary for readings on May 31st

The reading set for May 31st is about how software evolved from workstation applications to web applications as we move from Web 1.0 to Web 2.0. All four articles were written by O'Reilly.

[1] explains how open source is an important factor for the computer-using community. [1] points out that people buy computers out of interest in accessing "information applications" or "infoware", not for the applications built into the computer:
What's interesting is that the killer application is no longer a desktop productivity application or even a back-office enterprise software system, but an individual web site. And once you start thinking of web sites as applications, you soon come to realize that they represent an entirely new breed, something you might call an "information application," or perhaps even "infoware."

The Common Gateway Interface (CGI) enables web-based applications to serve web users:
CGI defines a way for a web server to call any external program and return the output of that program as a web page.
CGI programs may simply be small scripts that perform a simple calculation, or they may connect to a full-fledged back-end database server.

O'Reilly states that open source makes the Web/Internet possible, because the Internet's infrastructure was developed through the open-source process and relies on open source software:
more than 50% of all visible web sites are served by the open-source Apache web server. The majority of web-based dynamic content is generated by open-source scripting languages such as Perl, Python,..."

Open languages and scripts such as HTML and Perl also play important roles in web application programming, because they are freely shared between developers and easy to modify. While proprietary software manufacturers such as Microsoft set high barriers to entering the computer business, open source software lowers those barriers:
You can try a new product for free--and even more than that, you can build your own custom version of it, also for free. Source code is available for massive independent peer review. If someone doesn't like a feature, they can add to it, subtract from it, or reimplement it. If they give their fix back to the community, it can be adopted widely very quickly.


[2] is an article by O'Reilly that clarifies "Web 2.0". Web 2.0 applications treat the web as a platform. The following Web 2.0 principles are introduced at the beginning of [2]:

"The value of the software is proportional to the scale and dynamism of the data it helps to manage."
"Leverage customer-self service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head."
"The service automatically gets better the more people use it."

The following are central principles which enabled some Web 2.0 applications to survive from the Web 1.0 era:

* Hyperlinking is the foundation of the web. As users add new content, and new sites, it is bound in to the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.
* Yahoo!, the first great internet success story, was born as a catalog, or directory of links, an aggregation of the best work of thousands, then millions of web users. While Yahoo! has since moved into the business of creating many types of content, its role as a portal to the collective work of the net's users remains the core of its value.
* Google's breakthrough in search, which quickly made it the undisputed search market leader, was PageRank, a method of using the link structure of the web rather than just the characteristics of documents to provide better search results.
* eBay's product is the collective activity of all its users; like the web itself, eBay grows organically in response to user activity, and the company's role is as an enabler of a context in which that user activity can happen. What's more, eBay's competitive advantage comes almost entirely from the critical mass of buyers and sellers, which makes any new entrant offering similar services significantly less attractive.
* Amazon sells the same products as competitors such as Barnesandnoble.com, and they receive the same product descriptions, cover images, and editorial content from their vendors. But Amazon has made a science of user engagement. They have an order of magnitude more user reviews, invitations to participate in varied ways on virtually every page--and even more importantly, they use user activity to produce better search results. While a Barnesandnoble.com search is likely to lead with the company's own products, or sponsored results, Amazon always leads with "most popular", a real-time computation based not only on sales but other factors that Amazon insiders call the "flow" around products. With an order of magnitude more user participation, it's no surprise that Amazon's sales also outpace competitors.


Other notable features of Web 2.0 are the blog, a dynamic website in diary format, and RSS, which allows readers to subscribe to a blog.

Every Web 2.0 Internet application is backed by a database. Data is the core of a Web 2.0 application, and an application owner such as Amazon needs to do the following to compete in the market:

Amazon relentlessly enhanced the data, adding publisher-supplied data such as cover images, table of contents, index, and sample material. Even more importantly, they harnessed their users to annotate the data, such that after ten years, Amazon, not Bowker, is the primary source for bibliographic data on books, a reference source for scholars and librarians as well as consumers. Amazon also introduced their own proprietary identifier, the ASIN, which corresponds to the ISBN where one is present, and creates an equivalent namespace for products without one. Effectively, Amazon "embraced and extended" their data suppliers.

Google Maps acts as a data resource and provides data to other applications:
We expect to see battles between data suppliers and application vendors in the next few years, as both realize just how important certain classes of data will become as building blocks for Web 2.0 applications.

Two fundamental changes in the business model in Web 2.0 era:
1. Operations must become a core competency.
2. Users must be treated as co-developers.

Web 2.0 applications are developed in lightweight programming models:
1. Support lightweight programming models that allow for loosely coupled systems.
2. Think syndication, not coordination.
3. Design for "hackability" and remixability.

[2] summarizes the core competencies of Web 2.0 companies:

* Services, not packaged software, with cost-effective scalability
* Control over unique, hard-to-recreate data sources that get richer as more people use them
* Trusting users as co-developers
* Harnessing collective intelligence
* Leveraging the long tail through customer self-service
* Software above the level of a single device
* Lightweight user interfaces, development models, AND business models

[3] states that "searching" is one great challenge of the Internet OS era, and searching requires a lot of effort:
Cracking the search problem requires massive, ongoing crawling of the network, the construction of massive indexes, and complex algorithmic retrieval schemes to find the most appropriate results for a user query.

An Internet operating system must provide access to various types of media, and these media types require a common technology infrastructure: "access control", "caching", and "instrumentation and analytics".

Articles [1] and [2] have shown a trend: software applications are evolving from workstation-dependent applications to web applications. Articles [3] and [4] have shown that the Web/Internet has started to take on the role of an operating system, while companies offer online services to Internet users. The data on the web is not owned and controlled by any specific individual organization; it can be maintained and shared by a body of Internet users. Other hardware/software applications are built to utilize this data. In the Web 2.0 era, the web-as-operating-system architecture is slowly taking over the workstation-dependent operating system architecture.

Wednesday, May 26, 2010

Summary for readings on May 26th

The reading set on May 26th is about a variety of major open source projects. Since I will lead the talk on the BSD open source project, the focus of this summary is primarily on BSD. "Twenty Years of Berkeley Unix" from the reading set is a history of BSD project development, and it also explains how the different BSD distributions originated:

• In 1973, Thompson and Ritchie from AT&T’s Bell Labs presented the first Unix paper at the Symposium on Operating Systems Principles at Purdue University. At the Symposium, Bob Fabry, a professor from the University of California at Berkeley, became interested in Unix, so he requested a copy to experiment with at Berkeley.

• In January 1974, the first Unix installation at Berkeley was done on a PDP-11/45 machine with Version 4 of Unix.

• In early 1977, Bill Joy put together the first "Berkeley Software Distribution" (BSD), and this distribution included the Pascal system and “ex” editor.

• By mid-1978, the “Second Berkeley Software Distribution” (2BSD) was out, and it included the enhanced Pascal system, the “vi” editor, and termcap. Termcap allowed screen management to be consolidated by using a small interpreter to redraw the screen.

• In early 1978, Berkeley purchased a VAX machine, obtained Ver. 7 Unix from Bell Labs, and installed it on the VAX. However, Ver. 7 Unix did not take advantage of the VAX’s virtual memory capability.

• By January 1979, virtual memory functionality, written by Ozalp Babaoglu, had been added to 32/V (the VAX port of Ver. 7 Unix), and this version replaced the Ver. 7 Unix installed on the VAX.

• By the end of 1979, software such as the Pascal system, the vi editor, the C shell, and the smaller programs from the 2BSD distribution were ported to the VAX, and this combination resulted in the “Third Berkeley Software Distribution” (3BSD). Basically, 3BSD is 2BSD with virtual memory functionality. Note that Bell Labs decided to commercialize future Unix versions and could no longer support Unix research within the Unix community, so Berkeley quickly stepped into the role of supporting the Unix community’s further research.

• In the fall of 1979, Fabry made a proposal to the Defense Advanced Research Projects Agency (DARPA) about writing an enhanced version of 3BSD for the use of the DARPA community.

• By April 1980, Fabry began to lead CSRG to work on the DARPA’s project.

• By October 1980, 4BSD was produced. 4BSD included the Pascal compiler, the Franz Lisp system, and an enhanced mail handling system, and it supported auto-reboot and a 1K-block file system.

• By June 1981, 4.1BSD was released. 4.1BSD was a tuned-up version of 4BSD with the addition of auto-configuration code. Note that CSRG’s original intent was to call 4.1BSD “5BSD”, but this was refused by AT&T due to potential customer confusion between their commercial Unix release "System V" and "5BSD".

• In April 1982, 4.1aBSD was released for internal use only. 4.1aBSD integrated the TCP/IP protocols, and there were several new applications allowing local users to access remote resources over the network.

• In June 1982, 4.1bBSD was produced by the implementation of the new file system fully integrated into the 4.1a kernel.

• In April 1983, 4.1cBSD was released.

• In August 1983, 4.2BSD was released. 4.2BSD improved on 4.1cBSD with the following modifications:
- New signal facilities
- Networking support
- A standalone I/O system to simplify the installation process
- Integrated disk quota facilities
- Updated documentation
- Fixes for bugs tracked from the 4.1c release

• In June 1985, the 4.3BSD release was announced at the Usenix conference. The 4.3BSD distribution was a tuned version of 4.2BSD. However, the release plans were halted by BBN, which complained that Berkeley never updated 4.2BSD with the final version of their networking code. BBN claimed that Berkeley should replace the TCP/IP in 4.3BSD with the BBN implementation. (NOTE: I don’t really understand the relationship between BBN and Unix.) As a result, the 4.3BSD distribution included both the Berkeley and BBN implementations. DARPA decided to continue using the Berkeley implementation instead of the BBN implementation, because research showed that the Berkeley implementation worked more efficiently. 4.3BSD was finally released in June 1986.

• In June 1988, 4.3BSD-Tahoe was released. A promising computer model of the time called the Power 6/32 was to replace the aging VAX machines, so the BSD kernel was split into machine-dependent and machine-independent parts. This resulted in the 4.3BSD-Tahoe distribution. Splitting the kernel into machine-dependent and machine-independent parts was an important step in allowing BSD to be ported to numerous other architectures. Users of 4.3BSD-Tahoe still needed to get an AT&T source license, because the distribution still contained proprietary AT&T code.

• In June 1989, Networking Release 1 of BSD was out, and it was the first freely-redistributable BSD from Berkeley. Before Networking Release 1 existed, users always had to first get an AT&T source license. BSD users requested that Berkeley break out the networking code and utilities and provide them under licensing terms that did not require an AT&T source license. Its licensing terms made Networking Release 1 essentially free software:

“The licensee could release the code modified or unmodified in source or binary form with no accounting or royalties to Berkeley.
Although Berkeley charged a $1,000 fee to get a tape, anyone was free to get a copy from anyone who already had received it (that tape). Indeed, several large sites put it up for anonymous ftp shortly after it was released.”


• In early 1990, 4.3BSD-Reno was released. 4.3BSD-Reno used the virtual memory system from the Mach operating system developed at Carnegie Mellon University. The other major addition to the system at the time was a Sun-compatible version of the Network Filesystem (NFS).

• In June 1991, Networking Release 2 was released as a freely-redistributable expansion that included more BSD code not under the AT&T source license. CSRG re-wrote the Unix utilities from scratch based solely on their published descriptions. However, Networking Release 2 did not include six kernel files that could not be trivially rewritten.

• Six months after the release of Networking Release 2, Bill Jolitz improved Networking Release 2 with written replacements for the six missing files and called it 386/BSD. He put it up for anonymous FTP and let anyone download it for free. However, Jolitz could not keep up with the debugging of 386/BSD, and thus the 386/BSD users formed the NetBSD group to pool their collective resources to help maintain and enhance 386/BSD. Their contribution to 386/BSD resulted in the NetBSD distribution. The NetBSD group chose to emphasize support for as many platforms as possible and continued the research-style development done by the CSRG.

• FreeBSD was another BSD group, formed a few months after the NetBSD group. They supported just the PC architecture and went after a larger and less technically advanced group of users. FreeBSD built elaborate installation scripts and began shipping their system on a low-cost CD-ROM. In addition, FreeBSD supports a Linux emulation mode that allows Linux binaries to run on the FreeBSD platform.

• In the mid-90’s, OpenBSD spun off from the NetBSD group. Their technical focus was aimed at improving the security of the system. OpenBSD sold CD-ROMs incorporating many of the ease-of-installation ideas from the FreeBSD distribution.

• In January 1992, Berkeley Software Design Incorporated (BSDI) developed a commercial version of Networking Release 2 with the six missing kernel files replaced, and began selling the source code and binaries for $995. BSDI promoted the campaign with the phone number 1-800-ITS-UNIX. Shortly after, Unix System Laboratories (USL), a subsidiary of AT&T, sent a letter to BSDI demanding that BSDI stop promoting their product as Unix. BSDI then stopped using the phone number and changed their advertisements to explain that BSDI’s product was not Unix. However, USL was still not satisfied and filed a lawsuit to stop BSDI from selling their product. At the hearing for the injunction, BSDI claimed that they should not be held responsible for files distributed by the University of California. The judge agreed with BSDI’s argument and told USL to restate their complaint based solely on the six kernel files, or he would dismiss the lawsuit. USL then decided to refile the lawsuit against both BSDI and the University of California to stop the distribution of Networking Release 2. The result of the lawsuit was that three files were removed from the 18,000 that made up Networking Release 2, and a number of minor changes were made to other files. In addition, the University agreed to add USL copyrights to about 70 files, although those files continued to be freely redistributed.

• In June 1994, 4.4BSD-Lite was released under terms identical to those used for the Networking releases. The terms allowed free redistribution in source and binary form, subject only to the constraint that the University copyrights remain intact and that the University receive credit when others use the code. At the same time, 4.4BSD-Encumbered was also released, which still required recipients to have a USL source license.

• Since the lawsuit settlement also stipulated that USL would not sue any organization using 4.4BSD-Lite as the base for their system, BSDI, NetBSD, and FreeBSD had to restart their code bases with the 4.4BSD-Lite source, into which they then merged their enhancements and improvements.

• In 1995, 4.4 BSD-Lite Release 2 was released. Following the release of 4.4BSD-Lite Release 2, the CSRG was disbanded.

Wednesday, May 19, 2010

Summary for readings on May 19th

Reading set on May 19th is on the BSD license, the open source movement, and open source licenses. [1] mentions that users of software released under the BSD license must follow these restrictions:


"one should not claim that they wrote the software if they did not write it and one should not sue the developer if the software does not function as expected or as desired."


In addition, a BSD license can also include a clause restricting the use of the project name. [1] introduces the term derivative work and provides its definition as follows:


"derivative work is a product that is based on, or incorporates, one or more already existing works."


It is clear that the primary goal of the BSD license is to protect the copyright of the derivative work by setting restrictions on the users. One piece of evidence showing that the BSD license is advantageous to proprietary software developers:


"BSD-style licenses do not require that derivative works based on BSD-licensed software make the source code for such derivative works freely available....This allows the direct incorporation of code from open source projects into closed source projects."


This defies the fundamental principle of the GPL in terms of software sharing, since the GPL prohibits closing off source code. So, BSD serves software developers who intend to derive closed-source software from existing open source software. One interesting point made in [1] is that such developers tend to make their source eventually available, though the author of [1] does not provide a reason why they would do so.

[1] indicates that the original BSD license contained an advertising clause requiring the display of software information and an acknowledgment that the product "includes software developed by the University of California, Berkeley and its contributors". However, there are two problems with the use of the advertising clause:


1. "This (advertising clause) could easily result in large and cumbersome acknowledgments for products with numerous contributors and for software distributions consisting of multiple individual projects."
2. "...legal incompatibility with the terms of the GPL. This is because the GPL prohibits the addition of restrictions beyond those that it already imposes. Thus it was necessary to segregate GPL and BSD-licensed software within projects."

Therefore, this advertising clause has been taken out to avoid the above problems.

[2] is written by Eric Raymond. Raymond is the author of “A Brief History of Hackerdom” in 1996, an editor of the 1st edition of The New Hacker’s Dictionary in 1990, and is considered to be the hacker culture’s historian and resident ethnographer. He described his first encounter with Linux in late 1993 as a shocking experience. Raymond was surprised by how Torvalds and his team had put together Linux with features exceeding the original Unix. In the following years, he studied how Torvalds and his team succeeded and beat Brooks’s Law:


“…as your N number of programmers rises, work performed scales as N but complexity and vulnerability to bugs rises as N-squared.”


After close observation and experimentation, Raymond wrote “The Cathedral and the Bazaar” (CatB). Raymond’s CatB inspired Netscape to release its browser source code to the public. Netscape did so to compete against Microsoft’s Internet Explorer and Microsoft’s plan “to bend the Web’s protocols away from open standards and into proprietary channels that only Microsoft’s servers would be able to service”. Raymond then helped Netscape develop the Mozilla Public License and found the Mozilla organization.

The term “open source” was invented in a meeting among Raymond, Torvalds, and others. “Open source” was to replace “free software”, as:


“It seemed clear to us in retrospect that the term "free software" had done our (hackers) movement tremendous damage over the years.”


In the “open source” campaign, Raymond took the role of promoting the movement to the press. A few months after Netscape released its source code, Oracle and Informix also decided to support Linux. Microsoft’s “Halloween Documents” created a new surge of interest in the open source phenomenon.

After reading through [2], I personally think Raymond’s campaign was a success. As a result of the campaign, he mentioned that “Netscape’s browser reversed its market-share slide and began to make gains against Internet Explorer”. Compared to Stallman, the main ingredient of Raymond’s success is that he allied with Netscape in the competition against Microsoft and, at the same time, promoted the open source movement within the Netscape browser’s user community.

[3] is on the open source definition by Bruce Perens. He is the leader of the Debian project. He states the rights of programmers who contribute to Open Source:


“The right to make copies of the program, and distribute those copies.”
“The right to have access to the software's source code, a necessary preliminary before you can change it.”
“The right to make improvements to the program.”


Perens claims that free software is not a new concept. The concept was popularized by Stallman when he founded the Free Software Foundation and the GNU project. The Open Source Definition includes many of Stallman’s free software ideas. Raymond started the idea of Open Source because he was “concerned that conservative business people were put off by Stallman's freedom pitch” and “this was stifling the development of Linux in the business world”. Perens edited the Debian Free Software Guidelines to form the Open Source Definition.

As an example of non-open software turning into open software by public demand, Perens cites the “KDE, Qt and Troll Tech” case:


“KDE applications were themselves under the GPL, but they depended on a proprietary graphical library called Qt, from Troll Tech. Qt's license terms prohibited modification or use with any display software other than the senescent X Window System”


This conflict ended when Troll Tech eventually announced the release of an open-source version of Qt.
One point worth noting in The Open Source Definition (Version 1.0) is that the author can give permission to modify the original source code. When the program is made available to the user, the source code can be attached to the program, or the source code can be downloaded via the Internet. Modification can also take the form of patch files, which leave the source code unchanged.
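The patch-file mechanism mentioned above can be sketched with the standard `diff` and `patch` tools. This is only an illustration; all file names here are hypothetical:

```shell
# A modification travels as a unified diff while the original source
# file stays unchanged, which is the arrangement the Open Source
# Definition allows.
mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
printf 'int main(void) { return 0; }\n' > hello_orig.c
printf 'int main(void) { return 1; }\n' > hello_new.c
# Record the modification as a patch file; hello_orig.c is untouched.
# (diff exits with status 1 when the files differ, hence the || true.)
diff -u hello_orig.c hello_new.c > change.patch || true
# A recipient applies the patch to their own copy of the original.
cp hello_orig.c hello_copy.c
patch hello_copy.c < change.patch
```

After the last step, hello_copy.c matches the modified version even though the distributed original was never altered, so authors can distribute pristine source plus patches.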
[3] suggests different licenses for authors with different interests. For example, on the issue of whether modifications may be taken private:


“If you want to get the source code for modifications back from the people who make them, apply a license that mandates this. The GPL and LGPL would be good choices. If you don't mind people taking modifications private, use the X or Apache license.”


Or on the issue of whether to allow merging the program with proprietary software:


“If so (allow the merge with proprietary), use the LGPL, which explicitly allows this without allowing people to make modifications to your own code private, or use the X or Apache licenses, which do allow modifications to be kept private.”


It is interesting to see the variety of licenses listed with their different features in [3]. Perens does not really forbid the use of licenses outside the open source licenses; he merely suggests different licenses to meet authors’ needs. However, he does urge choosing an appropriate existing license instead of creating a new license for new software, because “fragments of one program cannot be used in another program with an incompatible license.”

[4] is written by Stallman. Stallman claims that it is a misinterpretation to refer to “open source” as “free software” nowadays; the philosophy of open source is not the same as that of free software, in that open source does not mention the freedoms of users as free software does. Stallman also objects to the idea that the open source movement is a “marketing campaign for free software”.


“The two terms (“open source” and “free software”) describe almost the same category of software, but they stand for views based on fundamentally different values. Open source is a development methodology; free software is a social movement. For the free software movement, free software is an ethical imperative, because only free software respects the users' freedom. By contrast, the philosophy of open source considers issues in terms of how to make software “better”—in a practical sense only.”


Stallman also describes how open source differs from free software:


“it (open source) is a little looser in some respects, so the open source people have accepted a few licenses that we consider unacceptably restrictive.”


Furthermore:


“The idea of open source is that allowing users to change and redistribute the software will make it more powerful and reliable. But this is not guaranteed. Developers of proprietary software are not necessarily incompetent. Sometimes they produce a program that is powerful and reliable, even though it does not respect the users' freedom.”


In Stallman’s view, free software values user freedoms and ethics more than open source does. Open source is still willing to restrict users for the sake of producing powerful and reliable software:


“They (leaders of open source) figured that by keeping quiet about ethics and freedom, and talking only about the immediate practical benefits of certain free software, they might be able to “sell” the software more effectively to certain users, especially business.”


[5] is a letter by Bruce Perens, co-founder of the Open Source Initiative, to the Debian community. Perens was on Raymond’s team in the open source campaign. In the letter, Perens shows concern about the schism between the “free software” community and the “open source” community. He feared that the open source movement was drifting away from the free software ideal and its regard for users’ freedom to use software. In conclusion, Perens stated that he “tended toward promotion of Free Software rather than Open Source”, because “Eric Raymond seems to be losing his free software focus.” Perens has been mentioned in a number of documents in this reading set; it is interesting how he changed his position between “Free Software” and “Open Source”.

REFERENCE
[1] BSD License
[2] OSV: The Revenge of the Hackers
[3] OSV: The Open Source Definition
[4] Why Open Source Misses the point of Free Software (FSF)
[5] Categories of Free and Non-Free Software (FSF)
[6] Talk about Free Software again (Bruce Perens)

Monday, May 17, 2010

Summary for readings on May 17th

Reading set on May 17th is about the GNU licenses. [1] introduces the new features in the latest GPLv3. GPLv3 supports the original four freedoms of free software, stated as follows:
* the freedom to use the software for any purpose,
* the freedom to change the software to suit your needs,
* the freedom to share the software with your friends and neighbors, and
* the freedom to share the changes you make.

[1]’s main purpose is to introduce new features in GPLv3. Besides the four freedoms of free software, GPLv3 also provides mechanisms to protect the software's copyleft from tivoization, laws prohibiting free software, and discriminatory patent deals. GPLv3 is shown to be compatible with other licenses, enabling free software sharing between the community using GPLv3 and other communities using other free software licenses. GPLv3 requires that users be provided both the object code and the corresponding source code:

"...when you host object code on a web or FTP server, you can simply provide instructions that tell visitors how to get the source from a third-party server. Thanks to this new option, fulfilling this requirement should be easier for many small distributors who only make a few changes to large bodies of source."

One interesting change in GPLv3 is that the term "convey" is used to replace the term "distribute" from previous GPL versions. The change was made because:

"...copyright laws in other countries use the same word, but give it different meanings. Because of this, a judge in such a country might analyze GPLv2 differently than a judge in the United States."

[2] is an FAQ on the GPL, and it includes basic user rights and cautions about what not to do.
The following are a few points which I found interesting.
An individual or organization can use modified GPL'ed code internally without releasing it to the public. When the modified code is released to the public, the following action must be taken:

"...if you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL."

Therefore, upon releasing a modified GPL'ed software, both the object code and the source code should be made available to the public. The GPL urges authors not to use non-free libraries/software in their new software:

"If it depends on a non-free library to run at all, it cannot be part of a free operating system such as GNU; it is entirely off limits to the Free World. "

If the software is already coded with non-free libraries/software, it is suggested that:

"please mention in the README that the need for the non-free library is a drawback, and suggest the task of changing the program so that it does the same job without the non-free library."

A number of points are mentioned in [2]. There is a clear boundary between free software and proprietary/non-free software. Proprietary/non-free software cannot be used within free software. The GPL also requires the source code to be released to the public upon release of a GPL'ed software, and this gives the software user the freedom to modify the software.
[3] expresses the FSF’s concern regarding the use of the LGPL, for it permits libraries to be used in proprietary programs. The LGPL is another version of the GPL. However, this document provides a case when it is appropriate to use the LGPL:

"when a free library's features are readily available for proprietary software through other alternative libraries. In that case, the library cannot give free software any particular advantage, so it is better to use the Lesser GPL for that library."

Basically, it is okay for a new library that has similar functions to other libraries to be licensed under the LGPL. This document also provides a case for licensing a library under the GPL to benefit the free software community as a whole:

"However, when a library provides a significant unique capability, like GNU Readline, that's a horse of a different color. ... Releasing it (Readline) under the GPL and limiting its use to free programs gives our community (free software community) a real boost."


Personally, I think [3] is not really suggesting that software developers should abandon the LGPL. Instead, it suggests a strategy that benefits free software developers more while still helping proprietary software developers at minimum cost to the free software community.

REFERENCE
[1] A Quick Guide to the GPLv3 (FSF)
[2] GPL FAQ (FSF)
[3] Why You Shouldn't Use the Lesser GPL for Your Next Library (FSF)

Wednesday, May 12, 2010

Summary for Readings on May 12th

The theme of reading set on May 12th is on "Licensing".

[1] defines the "freedom" of free software in depth; software must provide the following essential freedoms to be free:
* The freedom to run the program, for any purpose (freedom 0).
* The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this.
* The freedom to redistribute copies so you can help your neighbor (freedom 2).
* The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

The above four freedoms [1] apply to software in binary or executable form, for the user's benefit over the programmer's.

One interesting notion is that free software can be used commercially, and the programmer is allowed to sell copies of their work. At the end of [1], the FSF still shows concern that there are people who would misinterpret free software from the FSF's perspective. Personally, I think the general public is not to blame for misinterpreting "free" in terms of monetary value, since people consider something "free" as being given away at no cost. Perhaps there is a better term to replace "free" that would explain free software principles to the general public more clearly.

[2] begins with a fictional story set in a future where people are not allowed to share books and their actions on computers are monitored. Through this story, Stallman tries to convince readers that our society is gradually heading in that direction. He supports his argument by stating that laws restricting the user's right to read have been enacted globally:

"In the US, the 1998 Digital Millennium Copyright Act (DMCA) established the legal basis to restrict the reading and lending of computerized books (and other works as well). The European Union imposed similar restrictions in a 2001 copyright directive. In France, under the DADVSI law adopted in 2006, mere possession of a copy of DeCSS, the free program to decrypt video on a DVD, is a crime."

In addition, Microsoft keeps signatures and encryption keys of personal computers running Vista in an effort to control these personal computers. This scheme was referred to by Stallman as "treacherous computing" [2]. Microsoft does this to impose Digital Restrictions Management (DRM) on Vista users. To fight back, Stallman established the DefectiveByDesign.org campaign.

One interesting note is that SPA is the abbreviation of the Software Protection Authority in Stallman's fictional story, while it actually refers to the Software Publishers Association in reality. Stallman describes the SPA, both in his story and in reality, as a security-police entity threatening users' freedom to use their computers.

[3] states that the original purpose of copyright is to benefit users, not authors. Stallman addresses copyright in US law as follows:

"that copyright is not a natural right of authors, but an artificial concession made to them for the sake of progress."

Instead of allowing authors and publishers to maximize profit by monopolizing their products in the market, Stallman specifies the purpose of copyright as stated in the US Constitution:

"...to provide an incentive for authors to write more and publish more. In effect, the government spends the public's natural rights, on the public's behalf, as part of a deal to bring the public more published works."

The "copyright bargain" dictates that the benefit is supposed to go to the reading public. However, Stallman has noticed a number of faults in copyright law. The first problem, referred to as the "striking a balance" error, is that US copyright law is not equivalent to the copyright bargain as it is supposed to be:

"it assumes that all kinds of interest in a policy decision are equally important. This view rejects the qualitative distinction between the readers' and publishers' interests which is at the root of the government's participation in the copyright bargain."

So, copyright law assumes the reader's interest is equal to the publisher's interest, which is not the interpretation in the copyright bargain. The consequence of this misinterpretation is that it places the publisher in a more favourable position than the reader:

"The copyright bargain places the burden on the publishers to convince the readers to cede certain freedoms. The concept of balance reverses this burden, ..."

When the public pays taxes to the government, the government should buy something for the public at the best possible price. In terms of copyright policy, Stallman states that the government spends the public's freedom in the copyright bargain. Thus, the government should put the public's interest above the publisher's interest.

The second problem is "maximizing one output", which means maximizing the number of published works. Stallman introduces the principle of "diminishing returns":

"The first freedoms we should trade away are those we miss the least, and whose sacrifice gives the largest encouragement to publication. As we trade additional freedoms that cut closer to home, we find that each trade is a bigger sacrifice than the last, while bringing a smaller increment in literary activity."

In other words, maximizing publication makes readers sacrifice most of their freedom in exchange for encouragement of publication. [Note: this point is quite vague]

The third problem is "maximizing publishers' power" where the publishers make copyright cover every imaginable use of a work. Stallman supports this argument with the following example:

"...Shakespeare borrowed the plots of some of his plays from works others had published a few decades before, so if today's copyright law had been in effect, his plays would have been illegal."

As Stallman fears, these three problems in copyright give publishers more power and control over readers, even though copyright was originally supposed to benefit readers and serve the public's interest. For example, this resulted in the introduction of S. 483, a 1995 bill to increase the term of copyright by 20 years. Even the interpretation of the Constitution has shifted to favour publishers' interests over those of the public. This shows in Stallman's meeting with Congressman Barney Frank:

"...I (Stallman) asked him, “But is this in the public interest?” His (Frank) response was telling: “Why are you talking about the public interest? These creative people (industry) don't have to give up their rights for the public interest!..."

[4] is about "intellectual property". The term spread with the World Intellectual Property Organization, founded in 1967. Stallman describes it as follows:

'...to toss copyright, patents, and trademarks—three separate and different entities involving three separate and different sets of laws—into one pot and call it “intellectual property”'

Stallman says "intellectual property" causes confusion and overgeneralizes issues that are not related to each other.

Both [6] and [7] are about "copyleft". Copyleft is a licensing mechanism that keeps GNU software free for everyone (as opposed to releasing it into the public domain, where modified versions could be made proprietary):

"Copyleft says that anyone who redistributes the software, with or without changes, must pass along the freedom to further copy and change it. Copyleft guarantees that every user has freedom."

Copyleft is intended to reverse the role of copyright with respect to software freedom:

'Proprietary software developers use copyright to take away the users' freedom; we use copyright to guarantee their freedom. That's why we reverse the name, changing “copyright” into “copyleft.”'

There are three types of copyleft licenses, namely the GPL, the LGPL, and the FDL:

"...the specific distribution terms that we use for most software are contained in the GNU General Public License ....An alternate form of copyleft, the GNU Lesser General Public License (LGPL) , applies to a few (but not all) GNU libraries....The GNU Free Documentation License (FDL) is a form of copyleft intended for use on a manual, textbook or other document to assure everyone the effective freedom to copy and redistribute it,..."

Through [7], Stallman strongly urges that any improved code of GPL-covered software must be free software contributed back to the community; otherwise it is not released at all. Stallman addresses this point with the GNU Objective C example:

"...NeXT initially wanted to make this front end proprietary; they proposed to release it as .o files, and let users link them with the rest of GCC, thinking this might be a way around the GPL's requirements. But our lawyer said that this would not evade the requirements, that it was not allowed. And so they made the Objective C front end free software."

[8] is about software patents and how they can become an obstacle to software development. Software patents are considered part of "intellectual property" [4], a term which is biased in Stallman's view. Stallman is strongly against lumping copyrights, patents, and trademarks together as "intellectual property":

"None of them has anything in common with any of the others. Their origins historically are completely separate. The laws were designed independently. They covered different areas of life and activities. The public policy issues they raise are completely unrelated."

To be specific, Stallman differentiates copyrights and patents:

"Copyrights cover the details of expression of a work. Copyrights don't cover any ideas. Patents only cover ideas and the use of ideas. Copyrights happen automatically. Patents are issued by a patent office in response to an application."

Stallman proposes three approaches to deal with a patent:

1. Avoiding the patent
2. Licensing the patent
3. Overturning a patent in court

To summarize [8], Stallman is saying that the patent office is not rigorous enough in granting patents. By patenting an idea that is too general within a field, it blocks others from improving and advancing that idea, since they are required to obtain a license from the patent holder.


REFERENCE
[1] Free Software Definition (FSF)
[2] The Right to Read (FSF)
[3] Misinterpreting Copyright (FSF)
[4] Did You Say "Intellectual Property"? (FSF)
[5] Words to Avoid (FSF)
[6] What is Copyleft? (FSF)
[7] Copyleft: Pragmatic Idealism (FSF)
[8] The Danger of Software Patents (FSF)