Software and online tools Archives - Sky's Blog
Communicating in a networked world
https://blog.red7.com/category/technology-and-geeky-stuff/software-and-online-tools/

Visualizing packet traffic
Posted Sun, 26 Apr 2020 | https://blog.red7.com/visualizing-packet-traffic/


Very techie here… For a few months I’ve been operating a packet radio station on a 2-meter radio frequency here in the San Francisco Bay Area. I explored what it would take to make this a full “BBS” (like an online “forum”), and then backed off and let it just hang around on this frequency listening to the other (mostly BBS) stations. A few days ago, I got interested in graphing the data to better understand what stations were using the frequency and when.

Packet radio was very popular 20 to 30 years ago, and has mostly been displaced by other amateur radio digital technologies and by the Internet. Yet it’s still quite reliable and is a good way to pass messages from one place to another when Internet or voice communications are unavailable (e.g., in an emergency). I’ve always been interested in the presentation of data, and it was an interesting challenge to figure out how to chart the data in ways that support inquiry. The result of my experimentation is visible in a chart.

The chart is made by this process:

  • JNOS (the software that runs the packet radio station) logs all data it hears on the radio;
  • A Python script analyzes this log file, keeping track of what stations were heard in each hour;
  • The Python script creates javascript data in a form acceptable to Google Charts;
  • The javascript is transferred to a web server;
  • PHP code reads the javascript and inserts it into an HTML page;
  • Google Charts javascript fashions the data into the interactive chart.

A “cron” job carries out this process once each hour to keep the chart data current. Because each data bucket spans a whole hour, there’s no need to update more than once an hour.
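
For the curious, here is a minimal sketch of the log-analysis step described above. The log layout, file paths, and regular expression are hypothetical (JNOS logging varies with configuration); the point is only to show how heard-stations-per-hour can be bucketed and written out as data that google.visualization.arrayToDataTable() will accept:

    # Sketch: count distinct stations heard per hour in a JNOS-style log
    # and emit a JavaScript array for Google Charts. Paths and the log
    # format are assumptions, not the actual station configuration.
    import json
    import re
    from collections import defaultdict

    LOG_FILE = "/var/log/jnos/trace.log"      # hypothetical path
    OUT_FILE = "/var/www/html/chartdata.js"   # later read by the PHP page

    # Assume lines look like: "2020-04-26 14:03:22 ... fm KE6XYZ ..."
    line_re = re.compile(r"^(\d{4}-\d{2}-\d{2}) (\d{2}):\d{2}:\d{2}.*\bfm (\S+)")

    heard = defaultdict(set)                   # (date, hour) -> callsigns
    with open(LOG_FILE) as log:
        for line in log:
            m = line_re.match(line)
            if m:
                date, hour, call = m.groups()
                heard[(date, hour)].add(call)

    # First row is the header; one row per hourly bucket after that.
    rows = [["Hour", "Stations heard"]]
    for date, hour in sorted(heard):
        rows.append(["%s %s:00" % (date, hour), len(heard[(date, hour)])])

    with open(OUT_FILE, "w") as out:
        out.write("var chartData = " + json.dumps(rows) + ";\n")

A crontab entry along the lines of "5 * * * * python3 /usr/local/bin/packet_chart.py" (path hypothetical) would refresh the data a few minutes past each hour, which matches the hourly buckets.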

After Net Neutrality
Posted Fri, 05 Jan 2018 | https://blog.red7.com/after-net-neutrality/

Under the principles of net neutrality, Internet Service Providers [ISPs] are like common carriers, carrying all bits equally, but with neutrality nullified, what’s the likely outcome?

The Federal Communications Commission [FCC] in the United States has voted to nullify the common carrier status of ISPs, and thus to kill net neutrality. Other nations may not follow suit, of course, and I think there are customer actions that could make it difficult for carriers to run roughshod over this principle. The FCC calls its own action “Restoring Internet Freedom”; I, along with millions of others, contend that it only restores the freedom for carriers to differentiate, prioritize, and charge as they see fit, making things more difficult for us common folks in the long run.

On the positive side, improved and more timely data service seems really attractive. People want it. Faster and stutter-free movies. Voice-over-IP calls without interruptions. Gaming and hugely-fast downloads. So there is actually some consumer pressure to prioritize.

Personally I think most of this is “entertainment motivated” in that the customers who care will be mostly the “consumers” — not businesses and not nonprofits. That’s because even if ISPs charge businesses more for these premium prioritized services, the big businesses will pony up and pay for it. Small businesses and individuals will be less able to do this, and that’s a big part of the problem.

So here’s how I think things will play out:

Advertising — The first thing that’ll happen, and it will be soon, though it’s not specifically limited by net neutrality, is that ISPs will look at your web usage and keep track of the sites you visit. They’ll make money by selling this data to third parties. Are you visiting Amazon.com a lot? You’re probably shopping. Are you visiting REI.com a lot? You’re shopping for outdoor gear. Visiting Toyota.com a lot? Shopping for a new car. This kind of information is of great use and worth money to retailers, advertisers, and car manufacturers. This kind of data is already commercially shared from web sites to advertising networks, but when ISPs can gather and sell this information, they’ll make money from it. And what’s more, ISPs can collect the data without your knowledge, and without leaving any evidence that they are doing so. Other web sites and advertisers do not have that advantage.

An ISP can also sniff the content of your (unencrypted) email, or your file downloads, which is something a web site cannot do. In other words, the ISP can create an open book full of information it can sell, because it is capable of monitoring every unencrypted communication you make through its connection. You may know that Google’s gmail can sniff your gmail traffic and will present advertising based on the contents of your mail — the ISPs would be able to do this regardless of where your email is held, if the connections are unencrypted.

The Let’s Encrypt project, which has ramped up mightily in the past year, aims to make it easier to protect traffic between you and the web sites you use, by making web site content unreadable by ISPs. The ISPs can still see which sites you use and how long you’re using each site, but when a web site is encrypted (HTTPS) the ISP can’t see which pages you’re viewing, nor what content you’ve viewed or submitted. (And you can also protect all of your network traffic from your ISP using a VPN, which I’ll discuss later.)

So here’s how I think this is all going to play out over a time period of one to three years (2018 to 2020):

The Inspection Scenario — To shape and prioritize your traffic, the ISP wants to understand (and prioritize) the type of data packets you’re sending. In theory and as far as the technology is concerned, all packets are just binary data, but in practice an ISP can look inside those packets (see deep packet inspection) and make conjectures about which ones are video, or audio, or gaming, or file transfers, and could treat them differently. Such as giving them higher or lower priority. Or charging more for some kinds of data. And because the carrier knows where your packets are going (meaning Disney, or YouTube or Netflix), it can differentiate and then prioritize based on financial agreements it may have (or interests) in those endpoints. So I predict that ISPs, who already have the capability to examine content, will be differentiating in some way based on your content as early as 2018.

Premium Services Plan — If the network manager has the capacity to examine your data, it could charge more for certain types of data — for the data that has more value to you. In other words, the carrier might “take a cut” of the economic value of the packets. This would be a lot like your phone company charging you more money to call a bank than to call a barbershop. That doesn’t happen to phone calls because the phone company (in the US) is a common carrier, regulated as such by the FCC. But that’s what Net Neutrality did for data carriers — and that’s now been rescinded by the FCC. I predict that ISPs will announce premium pricing for some types of content by 2019 — starting with voice-over-IP or video — and will promise to prioritize such types of traffic, for that price.

Transfer of costs to the supplier — Using a process we call zero-rating, an ISP may make certain types of content effectively free to its customers. They could make web access free, but inject advertising. They could make music “free” as T-Mobile has (meaning certain sites are free). Or throttle the delivery of (low-quality) video as Verizon has. Zero-rating has the effect of making other content more expensive, and of excluding content or providers based on criteria invisible to the customer. I predict that during 2018 more ISPs will first offer to accelerate certain content (such as video) for a price to the customer, then begin soliciting suppliers themselves to underwrite this, and eventually contend that this saves the end user from having to bear this cost.

Premium Sites Plan — The network manager could also charge customers more, or give more reliable or faster service, for traffic from specific providers. “Get your Disney movies faster and without glitches – $19.95 a month” is what I’d expect to hear within a few years. This would be done by prioritizing all traffic from Disney to you. Or any set of providers. Web sites. Email. And so forth. Any service the ISP thinks it can charge extra for, it will. I predict that by 2019 we will see Top-100 Premium Sites Plans from ISPs. Something that would have been illegal under the Obama-era FCC rules of net neutrality.

HTTPS (web) encryption — We’ve already reached the point where around half of web sites use HTTPS encryption to keep pages and submitted forms private. This will increase to 90% by 2020 and will frustrate ISPs’ ability to look inside your interactions with these web sites.

Encrypted email — Here I’m pessimistic. People using standalone email apps on computers, such as Apple Mail, Entourage, Outlook, or Thunderbird, have had encryption available for 20 years, though it hasn’t been easy to use until the last year or two. I predict email encryption will only slightly increase by 2020. However, more and more customers use outlook.com and gmail.com and services that use HTTPS encryption on their webmail interfaces, which renders email contents opaque to ISPs. This is a mitigating factor that will continue to improve the privacy of email, except that the email hosting company can, of course, still read your mail.

The Resistance — How could you prevent this kind of predatory behavior? Well, even today, you could use a Virtual Private Network [VPN] to encrypt everything between your computer and the net. The encrypted packets are tunneled to another location (beyond your ISP), where they emerge onto the public Internet. For example, if you’re in San Francisco using “BigBad ISP” as your ISP, your computer might encrypt everything and send it to New York City, where it might emerge on a “GoodGuy ISP” network. BigBad ISP would lose the ability to examine your data, and consequently could only charge you one rate for all traffic. That wouldn’t prohibit GoodGuy from doing something on its end, of course, but presumably you’d choose to emerge in friendly territory. I predict that by 2018 VPNs will be used by 20% of individuals and that ISPs will discourage their use by limiting VPN traffic. I predict that by 2019 ISPs will differentially charge more for VPN traffic from non-business customers or will require that customers upgrade to more expensive business or “Pro” plans in order to use a VPN. And I think that by 2020 ISPs will block VPN traffic from consumer accounts.
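
As a concrete illustration of the tunneling idea (and not an endorsement of any particular product), a WireGuard-style client configuration looks roughly like the sketch below. Every name, address, and key here is a made-up placeholder:

    [Interface]
    # This computer's end of the tunnel (placeholder key and address)
    PrivateKey = <client-private-key>
    Address = 10.0.0.2/32
    DNS = 10.0.0.1

    [Peer]
    # The exit server sitting in "friendly territory" (say, New York)
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    # 0.0.0.0/0 routes ALL traffic through the tunnel, so the local ISP
    # sees nothing but encrypted packets headed to a single endpoint.
    AllowedIPs = 0.0.0.0/0

The important design point is the AllowedIPs line: with everything routed into the tunnel, traffic analysis at the ISP collapses to "this customer talks to one VPN server," which is exactly the protection described above.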

Higher Priced Privacy — And with VPNs blocked, ISPs will offer “Privacy services” for an additional price. In other words, if your ISP can’t see and make money off your traffic, they’ll charge you more to pay for the difference.

So the bottom line here is that businesses are in the business of making money by offering services. ISPs have offered connectivity for many years. That connectivity was priced initially based on bandwidth, then on data volume (particularly for mobile data), and now ISPs want to price their service on the value of the data. They’ll attempt to charge both their customers and the businesses who want to interact with their customers. They’ll offer “prioritized” services for an extra fee where there was no fee before. They’ll throttle services that don’t comply.

Because they can inspect customer behavior and data, they’ll profit by monetizing the value of the information about their own consumer customers. If that becomes difficult because of encryption, they’ll charge the customer an extra fee to protect his own data, under the guise that this is an improvement.

Net neutrality, and its interpretation under law, has largely protected consumers from this scenario for years. Now you have my predictions about how it could all unravel in just a few years.

Net Neutrality — Introduction and overview
Posted Thu, 04 Jan 2018 | https://blog.red7.com/net-neutrality-intro-overview/

I thought I’d write up some thoughts on underlying principles of the Internet — starting with Net Neutrality.

Net Neutrality — Its core is that (1) all bits/packets on the Internet have equal priority; and (2) all endpoints on the Internet are interconnected, with traffic accepted and delivered without prejudice to and from all of these endpoints.

The network operators (as data carriers) find better and better ways to carry traffic faster and cheaper (and perhaps more profitably overall), but to date it has been Internet pioneers, entrepreneurs, commerce, media, news and online services who have created new uses of this Internet platform, not the traffic carriers themselves.

The opponents of net neutrality want to eliminate the neutrality principles.

They tell us this is so the carriers can innovate and develop new services, and better manage their own networks. I’d say there’s some value in the management issue, but since the 1990s, carriers have been developing new capabilities, higher speeds, and the ability to handle more traffic even with net neutrality in place. What the elimination of net neutrality would allow them to do is charge based on type or origin of traffic — in other words, the carriers would presumably charge more for traffic that’s more valuable to the user, participating more directly in the profitability of every new service innovated by any entrepreneur. And also “calling the shots” on which services may have to pay the carriers more to prioritize, or even handle their type of traffic in the first place.

How do I know this? From conversations and news reports in the mid-1990s.

Net Neutrality has, so far, prohibited this kind of behavior and left the networks as essentially common carriers carrying all data without discrimination.

Legislation and the Internet — Legislation passed in the US, or China, or Iran or Brazil has localized effect for the most part. But legislation in the US, in the case of neutrality at least, will affect vast amounts of global Internet traffic, and the elimination of Net Neutrality in US law, followed by its elimination in practice by network managers, will have global effects.

Political Questions — This is not a “political” question. It is an economic question. Carriers would like to benefit more from the data they carry — currently they carry all traffic uniformly regardless of its content or economic value. Every bit costs the same as the next bit to carry, though some services use more bits. But financial data doesn’t cost any more to carry bit-for-bit than a Disney movie. Although Dems and GOP in Congress are coming down on pro- and con- sides of Net Neutrality, in real life it affects all of us equally. Since the Dems are more pro-neutrality, they are attempting to save a principle that will benefit Republicans every bit as much. The political arguments are really based on taking sides for or against the large network operators, and for or against live citizens.

Why it’s Important — Neutrality permits netizens to build platforms (software, hardware) without regard for whether their traffic will be speeded, blocked or slowed by communication providers. That’s just it in a nutshell. It has been an essential part of net life for many years.

It also permits “anyone” to connect to the net. There are no special fees based on type of business or type of content. Instead they’re based on volume or speed. Fairly and equally. Some content may be blocked legally, but this is rather narrow in scope, and is determined in law, not by network carriers.

As a fundamental principle of the Internet, Net Neutrality is essential to openness and innovation.

Community Computing in the 1970s
Posted Tue, 26 Dec 2017 | https://blog.red7.com/community-computing-1970s/

In the 1970s, as a part of my Computers And Teaching [CAT] project, I had a lot of conversations about how computers might transform learning, communication, and social interactions.

I’ve already remarked on some predictions I made in 1973, including working from home, email, co-working spaces and online community access to information and learning. There were a lot of people working on these concepts in the 1970s. Many people had these and similar ideas, and much of the work presaged today’s online educational and social media. My personal focus was on communication in education, and my work involved using a supercomputer (and later a minicomputer) as a hub for education and distance-independent group communication.

Notable among those I interacted with:

Community computing—People’s Computer Company (Bob Albrecht) in Menlo Park. Resource One (Lee Felsenstein) on Howard Street in San Francisco. Whole Earth Store (Rich Green) in Evanston (and Berkeley).

Computer conferencing—Murray Turoff (New Jersey Institute of Technology and formerly the Office of Emergency Preparedness). NSF project managers.

Networks—Doug Engelbart and team (Stanford Research Institute, SRI). I was at Doug’s lab the day they connected to the “Arpanet.”

(There’s a whole additional thread of people who worked in computer-based-education, which I’ll write up later.)

Resource One

[from PDF: Online Computer Conference, 1973]

This is Lee Felsenstein of Resource One speaking. This is our first attempt at using the ORACLE system (What did that OK mean?). We will be participating using our XDS-940 timesharing system. We hope to make the conference a kind of sub-conference here, since we will be able to accommodate several people building comment files on our editor program and shipping these comments off post-haste during our connect time. Likewise we will be able to accumulate files of comments from Evanston and will print these upon our high-speed printers so that participants here may read and absorb at less than 30 CPS. We are inviting several people from alternative education circles. We also hope to stir up enough interest in local people so that they will be interested in starting a Bay Area learning exchange, hopefully using our machine and its information-retrieval system (ROGIRS). We have been operating a version of this system as a public-access database in a record store lobby in Berkeley for over a hundred days, letting just plain folks come up and use it like an electronic bulletin board. It works! People smile as they are told that it’s a computer at their service, we have accumulated about 700 items on the database so far (Items expire too, so there’ve been many more entered in toto).

You search for your item by telling the computer to find all items satisfying a particular combination of keywords which you specify. Keywords are determined solely by the person who enters an item and can be any string of characters. The terminal tells the user how many items have turned up satisfying a given keyword set. Example FIND RIDE EAST (Note: ‘and’ is implied by no connecting word between keywords);

13 ITEMS FOUND (This is the response from the machine). AND NEW YORK OR NY (this is the user narrowing these – actually a mistake has been made here, the machine will add to the list of items having keywords RIDE, EAST, NEW, YORK, the sum of the items having keyword NY, anyhow enough detail). The user types ‘PRINTALL’ or ‘PRINT:’ if they want to see all of the found items or just the first one respectively. The user may add an item at any time. There is no preset field structure or limited set of keywords. The system can print an alphabetized list of keywords currently in use at any time. This list is kept by the Berkeley terminal. We think that this system can be used as is for filing in a learning exchange. It is important to note that the system makes no judgements, but is simply a very talented file clerk that doesn’t keep you waiting. We are ready to offer terminals into the system to local users who can participate in paying our costs. (We are nonprofit, the machine and a startup grant were donations, but operating money is not assured.)

We will be refining the information retrieval system and hope to be able to move it off future (equipment costs $50,010 for system serving 64 simultaneous users and capable of storing several million items XXX whoops, that would be about 100-200,000 items at 200 average characters per item) and will be eager and able to manufacture such systems which require no daily maintenance. Why not have everything?

Our address is 1380 Howard St., San Francisco CA, 94103, and our phone is xxxxx. Off for now.

 Schuyler comments about online conferencing

…Perhaps you know that this conferencing program is a part of a computer-aided-instruction system, though it could be used in any general-purpose time-sharing system. The PLATO-IV system, with about 200 to 300 terminals now connected also has some conferencing programs like this — one (called TALKOMATIC) is for simultaneous participation (synchronous conferencing) and another (called DISCUSS) is for asynchronous conferencing (storing its comments as it goes). These make it possible for sites like Northwestern (180 miles from Urbana) to converse freely with people at other PLATO sites, without going through the hassle of a long-distance phone call. They are extremely useful! Thus, conferencing is already an important part of the largest C.A.I. system built to date!

Karl Zinn – CRLT Ann Arbor, Michigan

I like the idea of on line conferencing, or in general, teleconferencing. Potentially it brings people together at less expense, and leaves a trace of interaction, and the interim storage of messages and comments can aid interaction when two personal schedules do not match. I hope such conference activity also will bring about more thoughtful statement of ideas and more careful criticism. However, the computer programs should do much to aid in this. For example, within this conference file, or another file could I list an agenda or set of issues (without listing all proceeding entries)? Can I list all current or previous participants? Can I search previous entries by participant, keyword or content (as well as date)? Can 2 or more participants work on a common statement and so on. Perhaps much can be learned from experiences with the PLATO system and with Engelbart’s system at SRI…

Max/MSP project
Posted Sun, 20 Dec 2015 | https://blog.red7.com/maxmsp-project/

“Max/MSP” is a computer app that implements an on-screen visual programming environment in which you can “wire” together components that make and process sound or logic. You could think of it as programming, but it’s unlike the old procedural programming you probably just thought of. It’s object-oriented, but more than that it includes many components that are paced by a clock. And the “programming” is carried out visually by creating and moving objects on the screen and patching (you might say “wiring”) them together. You might have a metronome beating four times a second, for example, and it could trigger sounds or actions that it’s wired to.

I just completed a one-semester course in Max/MSP and our final project challenged us to build a Max/MSP program (called a patcher) that implements an interesting musical performance.

The “music” in the performance is composed from a set of audio recordings — you might even call them samples — and because any kind of audio sample could be dropped in, there are billions of ways this music might be made and might sound. Because it can all be controlled from the stage in real time, we could also put together a performance using not only the Max/MSP components but live performers.

Video clips for you to play:

Description of this project (5 minute video) »

 

My First Computer
Posted Tue, 01 Dec 2015 | https://blog.red7.com/my-first-computer/

Well I don’t have a photo, but my first computer was an IBM 709. My next computer, for a very short time, was a CDC 3400, which was soon after replaced by the CDC 6400 that served  for roughly 7 years as “my” mainframe. Me and many other researchers, of course.

CDC6400

 

Because of my job, and my grad school research, I had privileged access to this computer, and pretty much “run of the farm” after midnight many nights and on weekends, along with the crew who programmed “Chess 1.0” and other delicious software at Northwestern University. Our sponsor, Ben Mittman, was Director of the computer center. Once we had dial-up (“modem,” look it up!) computer terminal access, my nights were spent more via remote access, but this computer still has a special meaning for me.

Testing a new FB connector
Posted Wed, 29 Jan 2014 | https://blog.red7.com/testing-new-fb-connector/

NASA Endeavour space shuttle over San Francisco 2012
Speaking of plug-ins that no longer work — the NASA Endeavour space shuttle took its last trip over San Francisco in 2012. (Click the photo for more…)

My old WordPress plugin to publish to Facebook has failed.

Now I’m trying the “built in” connector provided by WordPress.org. It connects through WordPress.com, and you have to have an account there, which is where you specify whether to post just to your timeline or to your pages as well.

The plug-ins I was using, which shall remain unnamed, have not been updated for a long time. When I got a custom Facebook URL some time ago, they stopped posting to my timeline. I believe the issue was that FB gave me a new “ID” and the plug-in was mistakenly getting the old ID, which FB no longer responded to. I hope this new arrangement through WordPress.com is going to work.

Do what I want, not what I (don’t) say
Posted Fri, 25 Jan 2013 | https://blog.red7.com/do-what-i-want/

I have lots of clients who have great ideas, wonderful vision, and yet have a lot of trouble understanding why I keep asking them for more and more specificity before I sit down and write some HTML or code. I’m afraid they sometimes think I’m a dolt because I keep asking for more detail about exactly what they want me to do. They find it hard to understand why I can’t just take an idea and run with it. Why do I need a detailed specification?

I ran into this passage a week ago, written over 10 years ago (but timeless), and the clarity and insight were so right on that I laughed out loud:

“The programmer, who needs clarity, who must talk all day to a machine that demands declarations, hunkers down into a low-grade annoyance. It is here that the stereotype of the programmer, sitting in a dim room, growling from behind Coke cans, has its origins. The disorder of the desk, the floor; the yellow Post-it notes everywhere; the whiteboards covered with scrawl: all this is the outward manifestation of the messiness of human thought. The messiness cannot go into the program; it piles up around the programmer.

Ullman, Ellen (2012-02-28). Close to the Machine: Technophilia and Its Discontents (Kindle Locations 352-356). Picador. Kindle Edition.

So when the client says, “Make that headline a little more greenish,” I now have something I can point them at so they’ll understand the difficulty of that seemingly simple task. I love it!

Nginx may not improve your performance compared to Apache
Posted Sat, 04 Aug 2012 | https://blog.red7.com/when-nginx-doesnt-help/

The predominant “web server software” packages used for WordPress sites are Apache and nginx. [1. tech discussion: Apache launches a new thread (a “program”) in server memory for every incoming page and object requested by your site visitors. This can rapidly clog the server’s memory as the number of requests per second increases. nginx initially launches a number of threads and then dispatches page/object requests to them for service—properly configured it doesn’t bloat up and fill memory.] Generally on smaller servers nginx will be more efficient because it doesn’t gobble memory like Apache does. The question of which web server software to use hinges primarily on the CPU power and memory resources that are required on the server side to make your site run properly.

If your web site requires a lot of CPU time to generate pages, then nginx may not hold any advantage for you. [2. I define “A lot” as more than a second.]. You can test to see what your page-generation time is using webpagetest.org — and what you want to look at is the bar that shows how much time elapsed between the browser’s request and the delivery of the first byte of the page. The time it takes to serve that first HTML file is pretty much composed of CPU and MySQL (database) time. If the time between the HTTP request for the page and the first byte served is long (a couple of seconds) then your site is probably too CPU-intensive and nginx may not help you out very much.
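
If you want a quick number without going through webpagetest.org, a rough time-to-first-byte check takes only a few lines of Python. The URL is a placeholder, and this lumps connection setup in with page generation, so treat it as a ballpark figure rather than a proper measurement:

    # Rough time-to-first-byte for a single page request.
    import time
    import http.client
    from urllib.parse import urlsplit

    URL = "https://www.example.com/"          # hypothetical site to test
    parts = urlsplit(URL)

    conn = http.client.HTTPSConnection(parts.netloc, timeout=30)
    start = time.monotonic()
    conn.request("GET", parts.path or "/")
    resp = conn.getresponse()                  # headers have arrived
    resp.read(1)                               # wait for the first body byte
    ttfb = time.monotonic() - start
    conn.close()

    print("Status %d, time to first byte: %.2f seconds" % (resp.status, ttfb))
    # If this is routinely a couple of seconds, page generation (PHP + MySQL)
    # is the bottleneck, and swapping Apache for nginx won't buy you much.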

That said, more and more people are moving toward specialized WP-hosting, where they don’t have to worry about what web server is used at all. And within a few years this may be a moot point, as nobody may be self-hosting their own WP any more!

 


Top sysadmin tools for iPad
Posted Sat, 16 Apr 2011 | https://blog.red7.com/ipad-sysadmin-tools/

Digital nomads, you can finally and really be the system administrator for your cloud (and other) servers from your iPad. Since December, each time I’ve left town, I have intentionally left my MacBook Pro at home in favor of my iPad. I found that just having a few specific apps allowed me to fully administer my cloud servers from the pad. Please note that a bluetooth (or other) keyboard is required for some of these apps to function fully. But generally I can do everything I need to when I’m on the road.

MY TOP APP PICKS FOR SYSTEM ADMINISTRATION ON iPAD

  • iSSH— gives you secure shell (SSH) access to your servers using name+password or digital certs. If you use a command-line editor on your server (I use vi), be aware that up-down-right-left arrows won’t really function if you use the onscreen keyboard, but from a bluetooth keyboard they do work! Recently I’ve also had trouble with ESC, and I’ve had to tap its onscreen “button” instead of pressing the physical key. You can also configure iSSH to emit true function keys (which are needed for some configuration work—in htop, for instance).
  • 1Password— what a great way to keep all those passwords in one place! And encrypted too. 1Password for iPad syncs with 1Password on my Mac through Dropbox. When I make a new password, or change one, it is always available on the iPad as soon as I need it. This way I can use those 20-character random passwords that I’d never remember if I had to commit them to memory.
  • Dropbox— Well of course you already know I use Dropbox for sync’ing 1Password across devices. And you can do without it if you sync the two devices “locally” on wi-fi, but I would never remember to do it—Dropbox lets it happen more in real-time and effortlessly.
  • DropDAV— (Not an iPad app, but essential nevertheless) I need DropDAV because I have a buddy who watches my back and serves as sysadmin when I’m on those long air flights or otherwise indisposed, and he and I need to share documents, which we do through DropBox. DropDAV isn’t an app, it’s a service. Sign up and it makes your DropBox documents available to Pages and Keynote through WebDAV services on DropDAV.
  • WordPress app— HTML textboxes don’t scroll properly on Safari on the iPad. This is a really big problem if you’re trying to admin a WordPress blog in Safari. So the WordPress iPad app is a necessity, though you don’t really have access to all of the WP admin features (it’s designed for bloggers, not admins), which means I’m constantly back and forth between this app and Safari even when I’m working on a single blog. This needs improvement, but I can make it work well enough for now.

PROBLEMS WITH THE iPAD

  • No Flash. This means I can’t fully utilize a lot of tools, like Cloudkick, when on the road because they use Flash extensively. (However, I can log in at CloudKick even with my Yubikey one-time-password USB device, as long as I have the iPad USB camera adapter with me. That’s a trick to be explained elsewhere.)
  • There’s no PGP mail encryption/decryption for the iPad mail app. Although I have other ways of dealing with encrypted mail when I’m on the road, this is still a big problem. If you rely on encrypted mail, be sure you have an alternative available when you’re traveling with your pad.

 

“Eyeballs-on-site” yielding to “eyeballs-on-content”
Posted Thu, 14 Oct 2010 | https://blog.red7.com/eyeballs-on-content/

When the web was new, the goal was to get as many “eyeballs” as possible looking at your site content—to aggregate readership with your site being the aggregation point. This pretty much followed the old rules of advertising and promotion—you needed people to see your advertising in order to succeed financially[1. Oh, wait, what do I mean “old rules” here? It’s still true, and that’s why the rest of this article is germane.]. The phrases “visit us often” or “bookmark this site” or “come back frequently” were the conventional wisdom, and web surfers used bookmarks  to remember what sites they wanted to go back to and read later. But they mostly never did except for the big news or entertainment portals.

RSS feeds and news readers began to change that. (Thanks Dave[2. Dave Winer].) I use NetNewsWire’s standalone software on my Mac, and online services like Google Reader let you integrate feeds into your iGoogle home page. You can also sync your Google Reader settings across multiple programs and devices. But in the last couple of months, the scene is greatly changing in subtle ways I think people haven’t spotted yet…

With the advent of larger-screen mobile devices (like iPad) and reasonable mobile apps like reeder that sync with Google Reader lists, we’re now reading our news feeds everywhere, and the pace at which we flip through them has greatly accelerated.

We all know the Facebook stickiness phenomenon — you open facebook.com and just keep it open all day and night—news is there, feed is there, friends are there, chat is there, and everything is available on that one site. Same could be said, for some people, for iGoogle or gmail, which are all squished together in one big Google mashup of a “site.” Facebook and Google “have all the eyeballs” and now you don’t stand a chance of picking up very many eyeballs for your own web site. If you put a short URL into your Facebook status/feed pointing to a blog post you just wrote, any of your friends who click to open it will read your blog post on the Facebook page—they never need to leave Facebook, even to read your blog entry.

And since iPad apps like Flipboard [3. I love Flipboard, and I can zoom through dozens of pages on Flipboard by flicking a finger way faster than I can use a mouse and a computer screen, and this really hammers the web server that’s at the target end of the action! Flipboard picks up its clues and links solely from FB and Twitter feeds—you can’t even tell it to track a web site—it only tracks sites and pages that get significant social activity!] pick up FB status and Twitter streams, these apps are besieging web sites with requests for content, and then caching that content on their own sites for later reading. Yeah, you’ve usually got an option to read the full content on the original site, but it’s way at the bottom of whatever you’re reading, and since attention spans are short, you’re only going to read the original once in a while. Flipboard and its cousins are a reason why web site server performance occasionally suffers these days[4. Server performance suffers when searchbots spider a site more rapidly than the server can handle. I discovered Flipboard and other crawlers were impacting small server performance about a month ago, then I got an iPad and discovered why they’re crawling the sites, and I’m impressed with the net result, so I’m finding ways to improve server performance to handle the extra load, thinking that it’ll be worth it in the long run, and that this phenomenon is going to increase.]—they’re positively crawling all over small web sites proactively finding and caching content for their readers to look at later on.

The bottom line is that you have to make all impressions count, regardless of whether they’re on your own web site or on Facebook, Flipboard (or anywhere else). You can no longer count on eyeballs coming to your web site. Your brand is wherever the readers’ eyeballs are, and no longer exclusively on your own web site. You’re not in control of this, and you’d better learn to take advantage of the situation and live with it.
Sky

 


 

Are hungry searchbots eating your small web site alive?
Posted Mon, 11 Oct 2010 | https://blog.red7.com/hungry-searchbots/

I work with a dozen or so clients at any given time, and in the last three (or thereabouts) weeks I’ve noticed that some sites on small servers with limited capacity are being “eaten alive” by spidery searchbots. And not just the usual suspects—Google, Yahoo, MSN—but by specialized searchbots that exhibit a kind of behavior I haven’t seen very much before. It used to be that web site owners prayed for the searchbots to come by, and searchbots by and large were sparing in their examination of pages, not hitting a site very hard at all, but building an overall image of the pages on the site over a long time. [1. Illustration: “Spider & Crossbones” pirate flag]

But times are changing rapidly! Even a site with very little human traffic may be suddenly and catastrophically overwhelmed by searchbot traffic.

Sites on small servers frequently are configured in such a way that they can serve perhaps a dozen or two simultaneous visitors[2. Web servers have limited RAM memory, and because of the way popular web server software, like Apache, is usually configured, once the RAM memory is full, they either slow down or stop serving visitors entirely. The condition is sometimes called “wedging” since it’s like trying to drive a wedge into a crack in a log.]. Searchbots (the robotic spiders that crawl the web on behalf of search engines) don’t use a web site the same way humans do. A human site visitor downloads a page, a bunch of photos, some style sheets, and then sits there a few seconds (at least) reading or looking at the page before clicking for more. Web servers like those that I maintain for my clients, are configured so that they can handle this kind of “human paced” load, and we have lots of tricks [3. Like offloading the photos to content management systems.] so human visitors can be served really fast. WordPress sites, for example, require considerable CPU time to create a dynamic page that’s composed of data, photos, plugins and other widgets. So we have the server cache the finished pages, and serve those cached copies rapidly, rather than spending a lot of server CPU time regenerating them for every visitor. A cached page might require only a tiny fraction of a second to serve, compared to the seconds it takes to build the page in the first place.

But searchbots frequently look only at the core page, and not at the photos[3. There are specialized searchbots now that look only at photos or videos.], and then quickly move to the next page they want to investigate. Sometimes a searchbot will request 5 to 10 pages in a single second—human visitors usually are paced at a page every few seconds. When a searchbot explores like this it can rapidly max out a small server. What’s more, human visitors tend to clump or cluster and look at similar things—while searchbots may request pages all over the place completely unconnected to each other. The human visitors, because they’re interested in similar topics, will end up hitting cached pages, while the searchbot, making 30x the normal number of requests per second [4. Say 10 pages per second rather than 1 every 3 seconds for a human], hits pages all over the site, unrelated to each other.

The worst of the “bad behavior,” however, arises from certain bots (I’ll name them in another article later on) that “anticipate” what their masters might want to see and do a “look-ahead”: instead of picking up a single page, they might pick up a page and 5 to 10 related pages, regardless of whether their master wanted those pages. You can think of them as building a repository of pages, stemming from the top or home page, that a visitor might want to see, just on the off chance that a visitor will come along wanting that specific page.

Although the spiders are usually there for legitimate purposes, related to fancy and sometimes useful new online services, this kind of spidering can really drag down a server!
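
One way to check whether this is happening to your own site is to measure request pacing per client in the access log. Here is a minimal sketch, assuming the common "combined" log format and a hypothetical log path; real bot identification takes more care than this, but it will surface the worst offenders:

    # Count requests per user-agent per minute in a combined-format access log,
    # to spot clients fetching pages far faster than any human reader would.
    import re
    from collections import Counter

    LOG = "/var/log/apache2/access.log"        # hypothetical path
    # ... [10/Oct/2010:13:55:36 -0700] "GET /x HTTP/1.1" 200 123 "referer" "agent"
    pat = re.compile(r'\[(?P<ts>[^\]]+)\] "[^"]*" \d+ \S+ "[^"]*" "(?P<agent>[^"]*)"')

    per_minute = Counter()
    with open(LOG, errors="replace") as log:
        for line in log:
            m = pat.search(line)
            if m:
                minute = m.group("ts")[:17]    # e.g. "10/Oct/2010:13:55"
                per_minute[(m.group("agent"), minute)] += 1

    # A human reader rarely exceeds ~20 page requests in a single minute.
    for (agent, minute), hits in per_minute.most_common(30):
        if hits > 20:
            print("%4d requests  %s  %s" % (hits, minute, agent[:60]))

From there you can aim a robots.txt Crawl-delay directive, rate limiting, or extra caching at the worst offenders (well-behaved bots honor Crawl-delay, though the aggressive ones described above often don't).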


HTML5 and geo-location
Posted Mon, 19 Jul 2010 | https://blog.red7.com/html5-geolocation/

I was reading an InfoWorld article on the benefits and features of HTML version 5, which isn’t a formal standard yet, but many elements of which are already incorporated into browsers.

Media: A major benefit for all of us will be that embedding media (videos particularly) will become standardized and greatly simplified, so the web developer won’t have to worry so much about plug-ins, players and compatibility.

Geo-location: But more fun perhaps than that, there is a geo-location feature built into HTML5, and it’s available today on some browsers (Chrome, Safari, Firefox). In this article Dive into HTML5 — You are here (and so is everybody else), there’s a cookbook for creating a web page that locates you and displays a Google map centered on your coordinates. My page will figure out where you are located and display the Google map — but only if you have an HTML5-compliant browser, sorry. Mobile browsers are particularly good for this because they know your location quite precisely.

I took an hour this morning to build the page, and subject to some debugging (and figuring out that the whole process is asynchronous), I had it working. Clearly if you’re at a wired location, Google is using your IP address and maybe some routing information to locate “approximately” where you are, but on my iPhone it gets much closer to the real location. I used the “You are here…” article, plus some advice from Google code.

And the interface asks you whether to reveal your location before it goes ahead and gives it to the web page to work with. Nice!

That bit about it being asynchronous is important. Anyone used to writing plain-vanilla javascript code knows that usually javascript statements are executed one after another, right down the page (as it were), and you’d think that making a function call to get the current location would actually complete the task and then return control when it finished, to execute the rest of the javascript statements. But this particular interface simply triggers the process of getting the location, and then when it has completed, it makes a callback to a javascript function where you can complete the rest of the work of putting the map up on the page (or any other thing you want to do with the location information).

This kind of asynchronous execution of statements and functions, with callback functions being given control later on when some action is completed, is common in most programming languages, but many javascript coders don’t use it very much. This is one case where you have to pay careful attention and plan ahead.

To get a better idea of how it works, look at the page I wrote and then view source to see how the javascript is written.
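
If you would rather not dig through the page source, the heart of it looks something like this sketch. The element ID and map options are placeholders, and it assumes the Google Maps JavaScript API is already loaded on the page; the key point is that getCurrentPosition() returns immediately and your callback runs later, once the browser has a fix:

    // Success callback: runs some time later, after the user grants permission
    // and the browser has worked out a position.
    function showMap(position) {
      var here = new google.maps.LatLng(position.coords.latitude,
                                        position.coords.longitude);
      var map = new google.maps.Map(document.getElementById("map"), {
        center: here,
        zoom: 14,
        mapTypeId: google.maps.MapTypeId.ROADMAP
      });
      new google.maps.Marker({ position: here, map: map });
    }

    // Error callback: the user declined, the lookup timed out, and so on.
    function noFix(err) {
      document.getElementById("map").innerHTML =
        "Sorry, no location available (" + err.message + ")";
    }

    if (navigator.geolocation) {
      // This call only STARTS the lookup; the script keeps right on going,
      // and one of the two callbacks above is invoked when it finishes.
      navigator.geolocation.getCurrentPosition(showMap, noFix, { timeout: 10000 });
    }

Notice that nothing after getCurrentPosition() can rely on the coordinates being known yet; anything that needs them belongs in (or is called from) the success callback.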

Now the InfoWorld article also mentions that HTML5 might not be a fully-adopted standard until 2022, which means that everyone will have blown by it long ago by then and we’ll have a hodgepodge of implementations none of which will completely match the eventual standard. Ahem! Things have to work faster than that in the online world!

Joi Ito on Innovation and Startups
Posted Mon, 28 Jun 2010 | https://blog.red7.com/joi-ito-on-innovation-and-startups/

I love Joi Ito’s advice about startups. Mostly he is talking about understanding risk. I particularly focused on one section just after 9 minutes into the video where he talks about how it’s folly to spend a lot of time building a business plan when it’s so inexpensive to go ahead and develop your product iteratively and develop the plan after you’ve seen how your customers are reacting to the product. Here’s the video:

[Embedded video: https://vimeo.com/6827318]

* Understand risk. Buy low, sell high. Manage your risk.

* Spend your time (as in investor) on the companies that are doing well, don’t just “nickel and dime” the ones that are failing.

* The cost of failure is decreasing. If you start from open source, and have a designer, an engineer, a products guy, users, and you get growing 30% a month or so, you don’t even need to write a business plan. [just after 9:00 minutes into the video – THIS IS THE KEY point I want to make]. If you can get your project to the point where it is running, growing, perhaps bringing in some money, you bring in the VC investors at that point – no earlier!

* Open standards give you a big advantage. Big companies spend $ millions to even think about a new project, but you can get your project off the ground for far less by starting with open source, good ideas and good thinking.

* Development methodology needs to be flexible, iterative, and respond to what you can learn from your customers. “If you’ve launched your product and you’re not embarrassed by it, you’ve launched too late.”

* Distribution. Every failed startup has had a business model, team, and so forth, but no users. Almost every team that has users eventually comes up with a business model [if they’re smart and paying attention]. You must be viral – you must infect your customers.

No chance for true security?
Posted Thu, 28 Jan 2010 | https://blog.red7.com/no-chance-for-true-security/

Is security dead on the Internet? Yeah, it probably is—as long as we rely on software other people have written[1]. Unless you’re capable of writing all of your own software, without any errors, and keeping it isolated from software written by anyone else, you’re never going to have a secure digital life[2].

But there are things you can do to protect yourself. NGO-in-a-box has developed Security-in-a-box, a set of tools and tactics for your digital security. Worth taking a look!

It’s often said that “if we can envision it, we can create it,” but in the world of computer (and network) software this is only partially true. We can attempt to create it, but it will always have bugs in it. And those bugs are the chinks in the armor that allow malware to work and cyberwarfare to succeed.


[1] That’s because I can write a perfect program with no bugs, but nobody else can.

[2] See also The Social Graph of Malware, my site where I explore ways in which social engineering is used by the bad guys to get malware onto your computer.
