Talk Python Training: Consuming HTTP Services in Python Review

Summary / tl;dr: Consuming HTTP Services in Python is a great addition to the training courses from Talk Python and Michael Kennedy. You’ll come away with a thorough knowledge of the best way to get data from the internet using the requests module; you’ll use real-world examples and APIs from Basecamp, Github, and a custom API Michael built just for the course; and Michael explains and demonstrates each concept in an easy-to-learn manner, with a little humor, recapping each one to make sure you understand.

In addition to being host of the well-known Talk Python podcast, Michael Kennedy has also created a number of Python training courses. The first, Python Jumpstart by Building 10 Apps, launched its Kickstarter exactly a year ago this month and was quickly followed later in the year by Python for Entrepreneurs (also on Kickstarter) and Write Pythonic Code Like a Seasoned Developer.

I started and finished Python Jumpstart by Building 10 Apps late last year and loved it. It was a very different learning experience than the University of Michigan’s Python for Everybody class on Coursera. There is an assumption with the Talk Python training courses that you have some basic understanding of computer science or programming. I don’t, so I typically go a little slower and take my time with the courses.

Looking back, there are a few things I liked about the Python Jumpstart by Building 10 Apps course that I was glad to see continue in this latest course:

  • Michael makes it very easy to follow along in the beginning of the courses. Everyone learns differently, but one of the ways I learn best is to follow along by typing the code as he does in the video, helping me commit it to memory.
  • After teaching you a core concept and coding it into one of the apps, Michael recaps what you’ve just learned in its own “Concept” video. This summarizes the concept you just put into practice and reinforces what you’ve learned.
  • Compared to some of the other online courses I’ve taken, I really like knowing that I’m learning from someone well known in the community, and I believe I’m learning not just how to code, but coding best practices. As an example: a few of the online classes I’ve taken never had me put code into functions and then call them from a main() function.
  • The source code to the examples Michael teaches you is on Github. You can download it, star it, fork it – but it’s available if you want to follow along, code along as the course goes, or just save it for reference for the future.

I’ve shared my enthusiasm for the Talk Python training courses here and on Twitter, so when Michael reached out to me last week asking if I was interested in a sneak peek at his latest course, Consuming HTTP Services in Python, I jumped at it (after making sure he knew I was still a novice early in my Python learning curve). I took a look at the course overview and this is right in my wheelhouse of what I need to learn. A core part of the app I want to build is exactly what this course is about – using the requests module to download at least a half dozen JSON feeds and then building my app around that. (My app will calculate the scoring for a custom NFL Pool league – it’s not a fantasy league, it’s different. All of the data comes from MySportsFeeds, which provides sports data via JSON or XML; I will consume it, store it in a database, and then write a Python program to calculate the league and player scores to be displayed on the league website.)

What I really liked about this course was that it was focused on one thing: consuming services. I’ve taken a few different Python courses online as I try to learn Python, and most throw all the basics at you – everything you’d expect in a beginner course – but it gets overwhelming. This was the first course I’ve taken that focused on getting you really good at one thing, covering the few different ways you might need to do it.

Immediately, I learned something new. I only knew of requests from what I had pieced together using Google and Stack Overflow. When I started playing around and putting together the building blocks of my app, I wrote the following code. MySportsFeeds currently uses HTTP Basic Authentication, so I have a separate file called secret.py that stores my username and password – I may be new to Python, but I’m smart enough to have created that, imported it, and added it to my .gitignore file!
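
For illustration, a secret.py along these lines (the variable names match the snippet below; the values are placeholders):

# secret.py – placeholder values, kept out of version control via .gitignore
msf_username = 'your-mysportsfeeds-username'
msf_pw = 'your-mysportsfeeds-password'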

This code polls the Playoff Team Standings feed on MySportsFeeds, and then I have some (ugly) Python code that runs a for loop to rank each of the two NFL conferences’ teams from 1 to 16.

import json
import requests
from requests.auth import HTTPBasicAuth
import secret  # keeps msf_username and msf_pw out of the repo

response = requests.get(
    'https://www.mysportsfeeds.com/api/feed/pull/nfl/2016-2017-regular/playoff_team_standings.json?teamstats',
    auth=HTTPBasicAuth(secret.msf_username, secret.msf_pw))

# Manually decode the raw bytes, then parse the JSON.
rawdata = response.content
data = json.loads(rawdata.decode())

And what did I learn? As I tweeted last week, my code now looks like this:

response = requests.get(
    'https://www.mysportsfeeds.com/api/feed/pull/nfl/2016-2017-regular/playoff_team_standings.json?teamstats',
    auth=HTTPBasicAuth(secret.msf_username, secret.msf_pw))

# requests decodes and parses the JSON body in one call.
data = response.json()

It’s not a lot, it’s just one line of code, but it’s these little things. I had no idea of the power of requests – this is just one specific example of something I learned from this course. Another thing I learned? I should take the URL in the example above, put the common part in a base_url variable, and then append the feed name as another variable. This is covered in a later chapter of the course – Consuming RESTful HTTP services. That chapter has a ton of great examples I’m going to be referencing when writing my app.
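
A rough sketch of that idea (the build_url helper here is just my illustration, not code from the course):

# Build feed URLs from a base instead of repeating the full string.
base_url = 'https://www.mysportsfeeds.com/api/feed/pull/nfl/2016-2017-regular/'

def build_url(feed_name, params=''):
    # Append a feed name (and optional query string) to the base URL.
    url = base_url + feed_name + '.json'
    if params:
        url += '?' + params
    return url

# e.g. requests.get(build_url('playoff_team_standings', 'teamstats'), auth=...)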

The Consuming RESTful HTTP services chapter is where the course really starts to take off. I ran into this with the Jumpstart course as well – Michael does a great job of teaching you the building blocks, and then the course seems to go from 0 to 60. This is where having previous programming experience helps: the jump from learning what each puzzle piece does to seeing how the puzzle fits together clicks faster. For someone like me, without any programming experience, it’s a big jump, but a possible one.

With that said, this chapter is fantastic. While I had a cursory knowledge of HTTP commands like GET and PUT, the API Michael built for the course is awesome. You have the opportunity to create your own examples and interact with the API and blog explorer app – this isn’t something you see with most online courses out there.

I also learned that I only want to use requests, not the built-ins. Though I do now have an understanding of the urllib built-in for Python 3.x if I’m ever cornered and have to use it.
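
For contrast, a rough sketch of a similar GET using only the built-in (a placeholder URL – a real feed would also need basic-auth handling):

# The same kind of request with urllib alone: more ceremony than requests.
import json
from urllib.request import urlopen

with urlopen('https://example.com/feed.json') as resp:  # placeholder URL
    data = json.loads(resp.read().decode())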

I will admit to skipping the chapter on SOAP. I’m a hobbyist, not an enterprise developer who may encounter SOAP. But it’s great that this is available as part of the course for those who may need it. This, combined with learning how to use JSON, XML, and screen scraping, makes it a complete course.

The last chapter is on screen scraping. There are a ton of tutorials and classes available on the web about screen scraping. I’ve taken a few of them – one of the challenges I have with my app is figuring out the playoff seeding and I thought about scraping NFL.com, but that’s a different story. This chapter kicks off with an example of using a site’s sitemap.xml – an approach I’d never seen before that makes so much sense once you learn about it. And if a website you want to scrape doesn’t have a sitemap.xml, shame on them for not being search engine friendly. If they don’t, Michael goes through other ways to scrape a website using Beautiful Soup, and he does it in the most Pythonic way I’ve seen yet in a course.
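
A rough sketch of the sitemap.xml idea (my illustration against a placeholder site, not the course’s code):

# Pull a site's sitemap to enumerate its pages instead of crawling links.
import requests
from xml.etree import ElementTree

resp = requests.get('https://example.com/sitemap.xml')  # placeholder site
root = ElementTree.fromstring(resp.content)

# Sitemaps use the sitemaps.org namespace; each <url><loc> holds a page URL.
ns = {'sm': 'http://www.sitemaps.org/schemas/sitemap/0.9'}
for loc in root.findall('.//sm:loc', ns):
    print(loc.text)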

I enjoyed Consuming HTTP Services in Python. With the requests module and JSON being cornerstones of the app I hope to write, it was great to learn everything I need to know to make that happen. Michael’s delivery is conversational and he makes it easy to follow along and do the code examples with him, if you choose to. If you have programming experience or are coming from a different language, the videos themselves will probably teach you what you need to do in Python. If you’re like me, a complete novice to Python, you’ll be able to follow along, but be prepared for the jump the course makes in the Consuming RESTful HTTP Services chapter – it moves pretty quickly, but if you’ve forked the Github repo you’ll have access to the program Michael has written, and you can (and should) write your own examples to interact with the API on the blog explorer. For $39, you’re getting a well developed course from someone well known in the Python community, teaching you the Pythonic way to interact with services. While other online training sites might have “sales” that are cheaper, as someone new to Python who has taken some of those courses, trust me – the Talk Python courses are well worth the money.

I’m still early in my Python journey and the two courses I’ve finished from Talk Python have been the best learning resources I’ve used out of all the books and training I’ve purchased (and it’s a lot). I’m still working my way through Python for Entrepreneurs and am really looking forward to two of the upcoming courses using SQLAlchemy as this database stuff is way over my head right now. Thanks again to Michael for allowing me to have a preview of the Consuming HTTP Services course – now it’s time for me to take his advice from the last chapter of the course and write some code – the best way to actually learn.

The macOS apps I’ll miss the most

I have been considering switching back to GNOME full-time and finally pulled the trigger last week, installing Fedora 25 on both my iMac and MacBook Pro. I installed GNOME on my iMac a couple months ago, but didn’t do the installation correctly and screwed up my MBR, leaving GNOME as the only boot option. I’ve fixed that this time and have kept dual boot (just in case, and for syncing iTunes with my iPhone and iPad).

The more I’ve thought about this over the last couple months, the more I have wanted to go back to GNOME. The privacy concerns I have about the big tech companies continue to nag at me, and there is something about the open source ethos that appeals to me. I may even switch back to Android from iOS if this works well.

I will still be tied to the Apple ecosystem with my work laptop. That’s both good and bad as I think about the few apps that have held me back from making the switch full time. The only alternative would be to switch to Windows, which is never going to happen. I haven’t used Windows since 2004 and considering what Microsoft has done with tracking in Windows 10…

There are a handful of apps on macOS that just don’t have a Linux equivalent, or if they do, it isn’t close in terms of usability. The last three are the big ones for me. I also see the irony in that those three apps are some of the most expensive applications I’ve purchased through the Mac App Store. You do get what you pay for, and I really shouldn’t be comparing these – especially the last two, which Apple has previously featured as apps of the year – to free and open source apps. I should be grateful that there are programmers out in the open source world making applications and offering them without charge rather than trying to compare them to Mac equivalents.

In no particular order, the apps I’ll miss the most:

Messages

I love text messaging from my desktop (and the immediacy of the notifications). I’m old, shouting Get Off My Lawn and just don’t like tapping on virtual keyboards compared to a real keyboard hooked up to a computer. But I can live without this.

Status: Can live without this.

Pocket

The web client is pretty good and I’ll probably continue to use the iPad as the primary reading device for Pocket. I can live without this. Firefox has a save to Pocket add-on that works just fine.

Status: Can live without this.

Reeder

Reeder is my RSS reader of choice, and there are a number of RSS readers available on Linux. Feedbin, the replacement service for Google Reader that I pay for annually, also has a decent web interface. New links open in a tab in the browser instead of Reeder’s readability feature. I’ll miss Reeder.

Status: Can live without it.

Update: I’ve found FeedReader in the Fedora 25 repositories. Version 1.6 is in the repo, but the developer has also made a Flatpak available for version 2.0, which was released two days ago and which I’m now running. A few thoughts:

  • This has fantastic usability. Almost to the level of Reeder. This is a slam dunk as far as RSS readers go.
  • I installed the Flatpak because version 2.0 adds support for both Feedbin and Pocket as a read-it-later service. Feedbin support is working great, and after upgrading from the 2.0 beta to 2.0 final, Pocket support is working flawlessly. FeedReader automatically added Pocket as a service since I had it configured in GNOME Online Accounts.
  • A big thank you and shout out to the developers for taking the time to release a Flatpak making it easy for users to upgrade to the latest version.

Updated Status: Found a replacement that is just as good as one of the best Mac apps.

1Password

Considering all the work I did over the Christmas holiday to change weak passwords to strong ones and remove duplicates, plus the integration with iOS, this is a big loss, as there is no Linux client for 1Password. There are a few password management alternatives on Linux, but I don’t know how good they are. Ryan C. Gordon aka icculus did write a 1Password script for Linux that may be worth checking out: https://icculus.org/1pass/

Status: More research needed and may just need to switch to Encryptr or Enpass.

Tweetbot

Ouch. This one hurts. I love Twitter, it’s the only social network I’m active on. I love syncing my Twitter reading experience between all my devices, which Tweetbot does better than any other application out there, regardless of platform or operating system. I’ve installed Corebird on Fedora and it’s ok, but it’s not Tweetbot.

Status: This one hurts. I can probably confine myself to Twitter on iOS and use Pocket to save and read links.

Ulysses

I love, love, love writing in Ulysses. It’s hands down the best writing app I’ve ever used after trying Scrivener, Hemingway and others. The iCloud integration is great, making it easy to jump to and from other devices, including iOS. I use Ulysses not only to write for my blog and journal (then import into Day One) but also as an Evernote replacement after Evernote screwed everyone over with their privacy settings (though they later backtracked, I’ve lost all trust in them). Like most of the great Mac apps, it’s Apple only. If I’m writing anything, I start in Ulysses.

I’m using Dropbox Paper right now to try it out as a replacement for Ulysses, and while Paper is close, its lack of true Markdown support while writing bugs me. It’s not too bad if I open it in its own browser window and use it in its own workspace – this makes it feel more like a writing app and not a browser. I’ve spent significant time learning Markdown for both Ulysses and Day One, so Dropbox Paper missing real keyboard shortcuts for Markdown kind of sucks (some work, like strong and italics, but others, like headings, don’t). I’ve installed the Markdown plugin in WordPress, making it easy to copy and paste drafts from Ulysses to my blog or to Day One. It is possible to export Dropbox Paper as Markdown, and after a cursory glance there are some decent looking Markdown editors available on Linux, so there may be hope.

Status: Can probably live without it. But I’m not happy about it.

Day One

This is probably the biggest one for me. If I love Ulysses, I love Day One more. And like Ulysses, Day One is exclusively in the Apple ecosystem. Ironically, I don’t write in my journal nearly as much as I should. But I love the integration with IFTTT and use it to track all of my exercise entries from Endomondo. I spent an hour looking at journaling options on Linux last week, and there are a couple, but I don’t see a way to sync the entries between computers, which is a must have feature. One option is to continue to use Day One on my work laptop or use a Markdown editor on Linux, save in Dropbox, and then import. I’ve also come across jrnl, a command line journaling app that says it works with Day One, but I really love the user experience of Day One’s app. This one hurts the most – Day One was one of the first apps I ever bought in the Mac App Store and I have years of journal entries in there.

Status: Ouch. I really don’t want to miss this. I’m not ready to start journaling in another app, so I’ll probably just write drafts in Dropbox Paper and then use my work laptop for journal entries.

Why I’m going back to Linux after five years of using macOS

I’ve been a supporter of the Electronic Frontier Foundation since 2004. Their work on privacy, free expression and technology covers issues I am passionate about. For the last year or so, I have become more concerned with privacy issues in technology. The rise of big data and the way everything tracks what we do gives me significant concern. I’ve been giving a lot of thought to which ecosystems I want to stay in. I’m not going to say I trust any of these technology companies, but I can control (or minimize) my footprint with some of them.

Last year I took a number of steps in this direction:

  • I deleted my Facebook and Instagram accounts. I don’t think I need to go into detail here, but Facebook isn’t something you would ever equate with the word “privacy”.
  • After Evernote said they would access your notes and data (only to backtrack later), I quickly stopped using Evernote.
  • I’m paying cash for most of my personal purchases and now shopping local and not online – even if I have to pay a bit more for things such as records, books or cycling gear.
  • I went through and deleted over a hundred online accounts over the Christmas break and used a password manager to make sure I wasn’t using duplicate passwords online and also that I was using secure passwords.
  • I’m no longer using Flickr (and Yahoo services in general) for my photos, and I have a tough decision to make about whether I delete that account and remove access to the photos there. (Wikipedia uses a number of my Green Bay Packers photos under a Creative Commons license.)
  • I switched to DuckDuckGo instead of Google as my default search engine.
  • As much as I’m intrigued by Amazon’s Alexa and Google Home, I won’t buy a voice-activated device. Just think about the data it knows about you – what smart devices are in your house, what you’re saying around it – and the recent news story about a police department wanting that data scares the shit out of me.
  • I’m not using TouchID on my iOS devices. Courts have ruled multiple times that your fingerprint is not protected under the Fifth Amendment – but a passcode is.

Yes, I sound paranoid. But at the end of the day, this is my decision and my choice. I may not have anything to hide, but I don’t believe that just because we have the technology, it always needs to be used to collect everything about you. While I will never be able to erase everything about me online or with these technology companies – nor would I necessarily want to – I can control with whom I do business and make conscious choices about it. This way I can go in with eyes wide open: yes, I’ve been using Gmail since it first launched, and Google knows almost everything about me. But that’s my choice to stay within Google’s ecosystem (for now), even if I start to use fewer of their services, such as switching to DuckDuckGo for internet searches.

I stopped using Microsoft Windows in 2003 when I switched to using Linux full time, until about 2012 when I started using macOS after buying my first MacBook. I love Apple’s hardware and I like macOS – the same Unix internals underneath, lots of polish, and excellent apps. Everything just works – you don’t have to fiddle with video card drivers or wireless. But you will have to do things the way Apple wants you to (see: iTunes). Integration with iOS is great – answer phone calls on your Mac, reply to text messages. But who knows what Apple is tracking, as well as the apps you’re using (I’m looking at you, Evernote). And don’t get me started on the Touch Bar on the new MacBooks. (No Escape key? Really?)

So I’m going back to using Linux on the desktop after five+ years away. There is no question that the macOS user experience is significantly better. But using the GNOME desktop on Fedora is pretty close and gets better every release. I’ll know my computing experience is secure and private. I’ll probably share some thoughts on what key applications I’ll miss most in a separate blog post. I’ll still need to use macOS at my day job, but I can control what I use at home and have the peace of mind that nothing is tracking me (outside of what’s in my web browser) when using my own computers.

Dwayne Crooks on learning Python efficiently

Dwayne Crooks wrote a fabulous blog post this week with his advice on learning Python efficiently.

Being a year into my journey, I couldn’t agree with him more. He lists five mistakes that hamper our ability to learn efficiently. Below I’ve listed his five mistakes with where I am in my journey in italics.

  1. Reading a book cover to cover. I strongly agree with this one. This was the first mistake I made a year ago when I decided I wanted to learn Python. I bought Think Python and Learning Python and quickly realized I am not the type of learner who can learn from reading and trying to follow along.
  2. Diving in without a plan. Check! Yes, I have a plan. I know what I want to build. Whew.
  3. Failing to narrow your scope. I think I’m ok on this one? Let’s just quote this one in full from Mr. Crooks:

    Having clear boundaries makes it easy to decide whether or not a new resource is worth your time. That’s why learning Python by trying to build something in it is a great way to go. You’d realize how much of Python you don’t need to know in order to accomplish any one task. You’ll find that the more you narrow your scope at the beginning, the more you’ll learn and the faster you’ll progress.

    The challenge for me in understanding this one is: if you’re new to Python, how do you know where to draw the boundaries? When I get stuck, I revisit some of the classes I’ve taken or search Stack Overflow. I quickly realize how much I don’t know when I find a new way to do something or come across something related that I don’t need. But knowing what I want to build probably expands my scope instead of narrowing it.

  4. Trying to learn 2 (or more) things at the same time. I’m being very careful with this one. I want to have a prototype of my application working before I move on to my next class, Python for Entrepreneurs, which will teach me how to build my application using Pyramid. The course will also cover CSS, Bootstrap and more web technologies. Where I’m struggling, though, is on my prototype – do I just build the prototype or do I try to learn some basic SQL, which is what the web app version will need? My head has been in the right spot on this one as I’ve tried to avoid learning SQL up until now.
  5. Spending too much time studying before you have experience doing. Mr. Crooks hits this one on the head and is basically describing me: Because we’re afraid to fail, we want to know what we’re doing before we ever try. So we spend a lot of time learning before ever trying to apply any of it. I’m wired to be a “learner” and do a deep dive into anything before I pull the trigger. Whether it’s a ton of research before buying a new TV or learning a new skill, this describes me well. But I think I’m ok on this one. If you were to look through my Github repo for nflpool (please don’t), you would see a mishmash of Python. There are probably 25 files in my repo that are basically just a scratchpad for me trying to figure out how to parse JSON or trying to write a for loop to get the results I need. There’s nothing Pythonic in there (yet). For example, I’m not using functions like I should. But once I get the different pieces working, I’ll refactor it the right way. You can argue whether I should be starting it right or not, but I’m diving in and trying to figure it out piece by piece. You have to start somewhere…

Mr. Crooks then goes on and shares his five steps to get started. I’m happy to see I’m on the right track.

One Year of Python

It was Black Friday of 2015. O’Reilly put on a sale of their programming ebooks and I was finally ready to take the plunge and learn Python. I bought three books.

I then signed up for a Coursera class, Python for Everybody, taught by Dr. Charles Severance and started the class. I was ready to do this. I needed a hobby. I had a problem to solve.

Then real life got in the way. A few months earlier, we started building a new house. In January it was time to sell our house, which meant hours of work. Then in February, we moved.

I put learning Python on the back burner. Before I knew it, it was July, and another six months had gone by. It was now fantasy football season and that was the problem I had to solve. I needed a program that would keep track of all football statistics and standings and automatically calculate each player’s points. It was time.

I re-started the Coursera course and put in the time. I was easily spending twenty hours a week reading the course materials, watching the videos and doing the homework.

I confirmed what I knew about myself: I learn best by doing, not just reading or watching videos. The books I had bought were helpful, but just sitting down and reading them, trying to follow along and do the exercises was difficult. Python for Everybody on Coursera was great.

I finished that and moved on to Python Jumpstart by Building 10 Apps by Michael Kennedy, which I had purchased in early 2016 via a Kickstarter campaign. I’m almost done with that a year after I started this journey.

Learning to code in Python is hard. I don’t have a background in computer science, and with some of the concepts the books and courses teach, I just don’t have the necessary base knowledge. This sometimes makes the concepts harder and slower to understand. I’m lucky that my wife has worked professionally as a programmer in multiple languages, including Java and SQL. But I drive her crazy when I ask her questions about concepts I clearly don’t understand. I use the wrong terminology or fail to grasp what I’ve been taught.

I don’t know how much I’ve retained from the classes and books. I’m trying to build my application in parallel with my learning. I’m convinced the only way I’m going to learn is to build something, which is a piece of advice most often found online for people aspiring to learn programming. I’m constantly hitting up Google and Stack Overflow when I get stuck. I’ll copy bits and pieces of code from these search results and I’m always doubting whether I understand what I’m copying. I’ve signed up for multiple newsletters and bookmarked dozens of websites with articles on how to learn, code snippets, programming challenges and more. I’m overwhelmed with the concepts I’m learning and I know I don’t understand, let alone use, these concepts.

But I’m going to keep trying. The only way I’ll learn is by building something. The code will be ugly. It will break. And I’ll keep updating it until it works and as I learn more, I’ll make it more elegant.

Here’s to another year.

Importing Team Data into NFLPool

Last weekend I discovered how to pretty print the five JSON files I get from MySportsFeeds. This was helpful in understanding just how much data is nested within each file. I also spent a good chunk of the weekend writing in a notebook. I mostly did some data modeling on what each table in the database should store and what their primary keys would be. I also captured things I need to research and started breaking the project into chunks, as I tweeted over the weekend.

Monday was a holiday, so I worked through the first four apps of Python Jumpstart. I took a break and went back to the JSON files I had worked with. My goal was to build what should be the easiest table and pull the team data out. This is a dictionary that includes the team name (Texans), city (Houston), abbreviation (HOU) and id (64). The ID number is supplied in the JSON feed and is unique, so I will use it as the primary key. There will be two more columns in the table for conference and division, but I wanted to deal with those later.

I wrote a for loop to try and pull out each team’s information. I quickly got stuck and nothing was working. At one point, the loop I had written worked, but only pulled out the data for the first ranked team. I showed my wife my code and she pointed out that it wasn’t iterating in a loop.

I was stuck for two nights working on this after dinner. I finally stepped back and modified my pretty print Python program and started breaking down all of the information in the JSON file again. I figured out what was a list and what was a dictionary and what was nested where. (It looks like I didn’t commit this to the git repo, oops! Will have to fix that.)

After doing this last night, I found the list I needed to work with. I then re-wrote my for loop and I was able to iterate through all 16 teams in the AFC:

# teamlist is the list of team entries I found; conference[0] is the AFC.
teamlist = data["conferenceteamstandings"]["conference"][0]["teamentry"]

x = 0  # index into the team entries
for afc_team_list in teamlist:
    afc_team_name = data["conferenceteamstandings"]["conference"][0]["teamentry"][x]["team"]["Name"]
    afc_team_city = data["conferenceteamstandings"]["conference"][0]["teamentry"][x]["team"]["City"]
    afc_team_id = data["conferenceteamstandings"]["conference"][0]["teamentry"][x]["team"]["ID"]
    afc_team_abbr = data["conferenceteamstandings"]["conference"][0]["teamentry"][x]["team"]["Abbreviation"]
    print((afc_team_name), ",", (afc_team_city), ",", (afc_team_id), ",", (afc_team_abbr))
    x = x + 1

I then copied and pasted and did it again for the NFC. I did try, unsuccessfully, to modify the conference list – “conference” – so I could just write one for loop instead of one for each of the two conferences. But it was working, so I’ll leave it for now. (I’m sure my code is ugly, but hey, I’m just starting).
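
For reference, a sketch of what that single loop might look like (illustrative, not what’s in my repo):

# Iterate over both conferences instead of duplicating the loop.
for conference in data["conferenceteamstandings"]["conference"]:
    for entry in conference["teamentry"]:
        team = entry["team"]
        print(team["Name"], ",", team["City"], ",", team["ID"], ",", team["Abbreviation"])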

After that, it was all about writing the SQL insert statements to put this into a SQLite3 database. (For now – later it will go into MySQL.) That took me an hour, but at the end I got it working and was even able to add the conference name to each row.
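
The insert step looks roughly like this (illustrative table and column names, not necessarily my final schema):

import sqlite3

conn = sqlite3.connect('nflpool.db')
cursor = conn.cursor()

# Drop and recreate the teams table on each run (a habit I note below
# that I need to break).
cursor.execute("DROP TABLE IF EXISTS teams")
cursor.execute("""CREATE TABLE teams
    (id INTEGER PRIMARY KEY, name TEXT, city TEXT,
     abbreviation TEXT, conference TEXT)""")

cursor.execute("INSERT INTO teams VALUES (?, ?, ?, ?, ?)",
               (afc_team_id, afc_team_name, afc_team_city, afc_team_abbr, "AFC"))

conn.commit()
conn.close()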

Next up, I need to take the data in the Division standings JSON file. In it is stored the division name for each division in a conference: AFC/AFC-East. I’ll need to write a for loop to grab it, slice it to remove the “AFC/”, and then stick that in the Division field for each team in the Teams table. I’ll also need to stop dropping and re-creating the table each time I insert data, but it’s working.
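
The slicing itself should be straightforward – a tiny sketch:

# "AFC/AFC-East" as stored in the feed; keep only the part after the slash.
division_name = "AFC/AFC-East"
division = division_name[4:]            # slice off the leading "AFC/" -> "AFC-East"
# or, more generally:
division = division_name.split("/")[1]  # -> "AFC-East"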

Progress!

Building the NFLPool webapp – Starting with JSON

I’m glad I started with the Python for Everybody specialization at Coursera before jumping into Python Jumpstart by Building 10 Apps by Michael Kennedy. Mr. Kennedy moves fast. I’ve completed the first four apps, and it’s good to get a refresher on the information I learned in Python for Everybody.

I also spent part of the weekend sketching in a notebook. I did some brainstorming about the database design I’ll need for NFLPool. I learned that one of the bigger differences between MySQL and PostgreSQL is that MySQL (with its MyISAM storage engine) doesn’t enforce foreign keys, but is much faster. The lack of foreign keys may make the design a bit tougher, but more on that later in a different blog post.

I also sketched out some ideas for the functions I’m going to need to write so I’m not writing the same bit of code over and over again. From there, I created a to-do list of things to start working through. I find this whole process of building an app overwhelming. I never thought I’d be using paper and pencil so much, but I’ve found it helpful to break this into smaller chunks and attack them one at a time.

Then I started working on the import process for the JSON. This quickly derailed as I realized just how many stats MySportsFeeds captures from an NFL game. That quickly turned into writing a JSON pretty print statement so I could see how the five different JSON files nested their dictionaries.

I currently download five JSON files every Tuesday via a cron job with all the statistics. I know my app won’t be ready for the 2016 season, and my hope is that by having 17 weeks of data, I can re-create the season to test that my app scores each player correctly as we move through the season week by week. When I download the JSON via curl, it includes all the web headers, such as:

HTTP/1.1 200 OK
Date: Wed, 21 Sep 2016 12:16:07 GMT
Server: Apache-Coyote/1.1
Cache-Control: must-revalidate, no-store, s-maxage=0, max-age=0, private
Access-Control-Allow-Headers: Origin, Content-Type, Accept, Accept-Encoding, Accept-Language, Authorization
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Content-Encoding: gzip
Access-Control-Allow-Methods: GET, OPTIONS
Content-Type: application/json
Set-Cookie: JSESSIONID=B7548F2309747418749B5421282A5E08; Path=/leaguemanager-web/; HttpOnly
Vary: User-Agent
Connection: close
Transfer-Encoding: chunked

And then the JSON starts right after that with curly braces. I was proud of myself as I wrote an if statement to load the file, read the lines, and load the JSON once it finds the opening curly brace. Then I wrote code to first print out all the statistics categories (commented out below) and pretty print all the JSON:

import json
import pprint

# Open the JSON file that includes headers.
# Change the name of the file to open to match the query below:
with open('json/20160921-division-team-standings.json') as file:
    alltext = file.readlines()  # Put each line into a list

# division-team-standings.json
for lines in alltext:
    if lines.startswith('{'):
        rawdata = lines
        data = json.loads(rawdata)
#        for stat_categories in data["divisionteamstandings"]["division"][0]["teamentry"][0]["stats"]:
#            pprint.pprint(stat_categories)   # Print all the categories in "stats"
        pprint.pprint(data)  # Print the JSON

I had five files to review, and I just manually changed the code to the file I wanted, with a code block for each of the files. I know I probably should have just written a function, but I was in the zone. (My code probably isn’t very Pythonic either, but I have to start somewhere on this journey.) I also know that when it comes time to build the real app I’ll be loading the JSON across the network and not from a local file, but future Paul gets to deal with that.
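
For the record, a sketch of the function I should have written (the name is just my illustration):

import json
import pprint

def pretty_print_feed(path):
    # Skip curl's HTTP headers and pretty-print the JSON body.
    with open(path) as file:
        for line in file:
            if line.startswith('{'):
                pprint.pprint(json.loads(line))
                return

# One call per downloaded feed instead of a copied code block:
pretty_print_feed('json/20160921-division-team-standings.json')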

I also spent some time playing around with the nflgame and mlbgame Python modules. I need to spend some more time with them and I’ll share some thoughts on those in another blog post.

Next class up: Python Jumpstart by Building 10 Apps

I’ve completed the Python For Everybody course taught by Dr. Charles Severance at the University of Michigan on Coursera. All that’s left is the capstone project to put into practice what I’ve learned, but as I’m doing this to learn Python and not for the official certificate, I’m going to skip it. The course is taught in Python 2.7 and I want to shift to Python 3.x.

Python For Everybody was great. The pace and the exercises were perfect for the class. I wish I had realized sooner that there were additional exercises in the textbook that were not part of the required Coursera class. The fourth class, Python and Databases, was intense; the pace accelerated as it taught you SQL and how Python connects to databases (SQLite specifically). The homework was much simpler in the databases class compared to the first three sessions – you usually only had to make some minor changes to the SQL syntax to get the grade.

The two things I’m going to need to focus on to have success in building the two apps I want to build are dictionaries (for importing statistics via JSON) and databases. If I walked away with one thing from the databases class, it’s that I’m going to need to spend some time with paper and pencil and plan my information architecture and database models if I’m going to be successful.

Next I’m going to start Python Jumpstart by Building 10 Apps by Michael Kennedy of the Talk Python podcast. I supported the Kickstarter earlier this year and am excited that I now (hopefully) have enough of a base understanding of Python to tackle it. This will be taught in Python 3.x (yay!) and I’m hoping that with that base knowledge, building these apps along with the included tutorials will give me the practice I need to later build a real app. It’s also going to go into a little more detail than what I’ve learned so far on list comprehensions (which make my head hurt), BeautifulSoup for web scraping, and classes.

I also supported Mr. Kennedy’s next Kickstarter, Python for Entrepreneurs. This also has me excited as the second phase of building my fantasy sports app will be deploying it on the web. The description looks perfect for what I’ll need, in addition to learning the web framework Pyramid:

You will learn to build and design your web app

This course will teach you how to build a data-driven web application in Python.

We will:

  • Build our web app with the Pyramid web framework, "the Python web framework that supports your decisions, by artisans for artisans."
  • Create and connect to our database using SQLAlchemy, the most popular data access layer in Python
  • Learn the core elements of web design including CSS and front-end frameworks such as Bootstrap.

Time to get to work.

Web Scraping and Python

I’m flying along in the Coursera course Python for Everybody, from the University of Michigan taught by Dr. Charles Severance. I’ve completed the first two of four courses which give you an introduction to Python.

I’m now on the third course, Using Python to Access Web Data. This and the fourth course, focused on databases, are the two key foundations for the web app I want to build. I just finished Chapter 12, which introduces the BeautifulSoup library for scraping web pages. This is going to be huge – I’ll be able to scrape ESPN to find which MLB or NFL teams lead their divisions or the wild card races.

Being on vacation this week, I’ve been able to complete a few chapters and am now a couple weeks ahead of schedule. I’m tempted to pause and see if I can take what I’ve learned with BeautifulSoup and write some small Python programs to actually scrape and print the results. It might be good practice to reinforce what I’ve learned.
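
Something like this is what I have in mind (an illustrative example against a placeholder page, not from the course):

# Fetch a page and print every link on it with BeautifulSoup.
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen('https://example.com')  # placeholder URL
soup = BeautifulSoup(html, 'html.parser')

for anchor in soup.find_all('a'):
    print(anchor.get('href'))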

The next two chapters are key as well: XML, and then the one I’m most looking forward to, JSON. I’ve already signed up for a developer account with MySportsFeeds and am receiving JSON data for player stats, teams and conference standings. I’ve spoken in the past with one of their lead developers and they don’t currently keep statistics for wild card or playoff standings, so I’m going to need to use BeautifulSoup in my app to get those. I’ll also need to decide whether I’m going to use that JSON data for player stats and query against it myself, or just use the nflgame or nfldb libraries that have already been built. The biggest challenge there is that both of those libraries are written in Python 2.7 and I really want to write my apps in Python 3.x.

I know I’m getting ahead of myself. Every time I learn something that will be applicable to the app I want to build and I talk to my wife about it, she tells me to slow down. My mind is always racing with how I can apply what I’m learning and how it will affect the architecture of the app. Some people say the best way to learn a programming language is to build something and learn as you go. I can’t wait to put all this Python learning to practice.

Python for Everybody at Coursera with Dr. Chuck

tl;dr: I’m spending the time to learn Python primarily using the free course available at Coursera taught by Dr. Charles Severance of the University of Michigan and am really enjoying it.

The good news: I’ve committed to my goal of learning Python and I’m sticking to it.

The bad news: I haven’t been writing about my progress as much as I should be. Hey, learning this stuff is hard and takes time. That’s my excuse and I’m sticking to it.

As I mentioned in my last post, I re-enrolled in the Coursera course, Learn to Program and Analyze Data with Python, from the University of Michigan, taught by Dr. Charles Severance. It includes five courses, each lasting about six weeks, with the last course being a capstone project. You can audit the course for free or pay for an official certificate; I’m auditing.

I flew through the first course and am now 60% of the way through the second course, Python Data Structures. In the first course you learn the basics of computer science and Python – print statements, expressions and variables, loops and functions.

In the second course, Python Data Structures, you continue to build on that, learning how to slice strings, search within strings, and work with files. This is where it finally comes together and you’re writing a real program for the homework assignments.
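
A tiny illustration of those topics (my own example; words.txt is a hypothetical file):

s = 'Monty Python'
print(s[0:5])         # slicing -> 'Monty'
print(s.find('Py'))   # searching within a string -> 6

with open('words.txt') as fh:  # working with a (hypothetical) file
    for line in fh:
        print(line.rstrip())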

I am enjoying Learn to Program and Analyze Data with Python on Coursera. I find the professor’s video lectures easy to follow and understand. The conversational tone is helpful and I appreciate how he talks about a concept and also shows slides in the video that he draws on to help illustrate his point. I believe this helps those who learn by listening and those who learn visually.

Here is an example of the second course’s syllabus for week three that I just completed. As you start the week, you easily get an overview of the week ahead:

  • The lecture videos you will need to watch and how long they are
  • A wiki page of notes related to the lecture created by students
  • The assignments you will need to complete
  • A video showing the worked exercises to watch after the assignment is completed
  • Bonus (optional) material for the week

There are two downsides to the course. The first is that it is taught in Python 2.7. One of the best parts, though, is that Dr. Severance has made the course and the book available under a Creative Commons license, which is awesome. You don’t necessarily need to do it on Coursera, as the course materials are available on his website at Python Learn, with the videos also available on YouTube. If you visit the site, you’ll see the book has been rewritten for Python 3 and the materials are now being updated; I’m hopeful that the course on Coursera will be updated in time as well.

The second downside is more of a personal thing. The course has a neat autograder online:

As you can see in the screenshot in the upper left, it tells you what to do to complete the assignment. Just below that is the editor that gives you some code to start. You edit the code and press “Check Code” and the output is displayed in the upper right box. If the output matches the assignment, the grade is automatically updated on the server.

I learned in this week’s assignment that I need to write my code in an editor and save it rather than just doing it in the browser. I had to go back and re-watch the worked exercises for the previous chapter to review the code from the last homework assignment, as this week’s homework built upon it. I won’t make that mistake again! Also, if you are really stuck on a homework assignment, there is a discussion forum where you can ask questions and get hints about what to focus on to complete it.

As I’ve worked through a couple of the books I’ve bought and proceed through this course, putting the concepts into practice is the hardest part. While I understand the concepts, or at least think I do, putting it into practice and writing a real program is where I struggle. As frustrating as it can be to go back, re-read a chapter or re-watch a video when I can’t write the code, I firmly believe I am going to learn best by practicing writing actual code over and over again if I ever want to meet my goal of writing the program to calculate the fantasy pool scores. I am finally making the time commitment to learn Python and enjoying the process thanks to Dr. Chuck and Coursera. (You can follow Dr. Chuck on Twitter at @drchuck).