December 23, 2018
I finished the Garfield Comic Of The Day Webscraper and posted the results on the site. It will now post to Twitter at 11 AM every day and spread the joy of Garfield comics :)
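The daily 11 AM trigger can be handled by sleeping until the next target time. The bot's actual scheduling code isn't shown on the site, so this is just one assumed approach, sketched minimally:

```python
from datetime import datetime, timedelta
import time

def seconds_until(hour, minute=0, now=None):
    """Seconds from `now` until the next occurrence of hour:minute."""
    now = now or datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # already past today's slot; use tomorrow's
    return (target - now).total_seconds()

# In the bot's main loop: sleep until 11 AM, then scrape and tweet.
# time.sleep(seconds_until(11))
```

Cron would also work here, but a sleep loop keeps the whole bot in one script.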
December 13, 2018
I again made little progress on my projects this week due to personal things. I am in the process of making a Garfield themed fake website, and have been actively trying to redo the Python based Express or Local project using an example project I found. I am also learning Ruby on the weekends, and am planning a Yelp related project based on restaurants in Sunnyside, Queens.
December 6, 2018
I made small progress on Python and C# this week, since I've had to deal with web hosting issues. I am currently making a Garfield themed fan website to showcase the skills I have gained from Chapter 4 of the HTML+CSS and JS books. I have also begun learning Ruby on the weekends, and updated my GitHub repo with all of my recent progress, but will rehost on Repl.it when I get the chance so that beginners can try out my code on their own.
November 30, 2018
I posted on the MTA related Google+ group forum to ask how to home in on the specific 7 NYC Train data. I also switched the focus of the Bull or Bear Webscraper C# app to scraping the CNBC website, then switched back to the Bloomberg C# API, since they DO provide examples within the same download and it's worth a shot. I updated the Python and C# sections with new examples from Chapter 7 and Chapter 2 respectively. I also uploaded all of my progress to my GitHub repository called "scripts", and will upload my C++ progress as Repl.it files to be posted in the Programming Projects section as well.
November 21, 2018
I was only able to work on Python this week, but made a bit more progress on the MTA Express or Local App, since I cleaned up a lot of the code into separate functions. The biggest issue I'm facing is filtering the data down to just the 7 Train by indexing into feed.entity with a specific value. I've been trying to brute force it and see which value works, but I might be better off actually looking into some of the .txt files provided by the MTA. I also finished the Visit Monaco website, but am still fixing the styling of various HTML elements throughout its pages.
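Rather than brute forcing index values, each entity's trip carries a route_id that can be filtered on. The real feed.entity items come from the MTA's GTFS-realtime protobuf; plain dicts stand in for them here so the filtering logic is easy to see:

```python
def seven_train_entities(entities):
    """Keep only entities whose trip runs on route '7' (the Flushing line).

    Stand-in dicts mimic the shape of parsed GTFS-realtime feed entities:
    entity -> trip_update -> trip -> route_id.
    """
    keep = []
    for entity in entities:
        trip = entity.get("trip_update", {}).get("trip", {})
        if trip.get("route_id") == "7":
            keep.append(entity)
    return keep
```

With the protobuf objects the same idea is `entity.trip_update.trip.route_id == "7"`.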
November 16, 2018
November 9, 2018
November 2, 2018
I updated the "Visit Monaco" website with more object-oriented methods and properties (String and Number Objects in particular). I have also been working on fixing the time module issue for the MTA Express or Local App. I also began learning C# 7.0 within the last two weeks, and posted my latest progress, as well as a Bloomberg related C# 7.0 webscraper project.
October 26, 2018
I updated the "Visit Monaco" website with more JS based scripts, and included more of my Python Chapter 7 examples in the Programming Books Progress page. The MTA app is turning out to be more or less a SQL database app. I also posted C# examples from the O'Reilly "C# 7.0 in a Nutshell" book, which I'm learning so I can possibly apply the skills toward a related job in the Dev department at work.
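Treating the MTA app as a small SQL database could look like this sqlite3 sketch. The table layout here is my own assumption, not the project's actual schema:

```python
import sqlite3

def cache_arrivals(rows, db_path=":memory:"):
    """Cache (stop_id, arrival_unix_time) rows in a tiny SQLite table."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS arrivals (stop_id TEXT, arrival INTEGER)")
    con.executemany("INSERT INTO arrivals VALUES (?, ?)", rows)
    con.commit()
    return con

# Cache two arrivals, then query them back.
con = cache_arrivals([("701N", 1540000000), ("702N", 1540000300)])
count = con.execute("SELECT COUNT(*) FROM arrivals").fetchone()[0]
```

sqlite3 ships with Python, so no extra hosting setup is needed for this.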
October 19, 2018
I made better progress on the MTA based Express or Local App by using the .txt files found in a related forum post that the MTA didn't include in the initial .zip download. I also worked Browser Object Model and Document Object Model ideas into the Stats section of the Visit Monaco website. I have been working through the "Qt Beginner Guide" from Qt's official documentation to become more familiar with it so I can further help the Lubuntu team, and will include related examples in the Programming Book Progress section of the site. I also started the O'Reilly book "C# 7.0 in a Nutshell", hoping to utilize the skills later for a related job at work, and made further progress in "Automate the Boring Stuff with Python" with more regex examples from Chapter 7.
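Chapter 7 of "Automate the Boring Stuff with Python" covers regular expressions; its running example is a phone number matcher along these lines:

```python
import re

# Match US-style phone numbers like 415-555-1234.
phone_re = re.compile(r"\d{3}-\d{3}-\d{4}")

match = phone_re.search("Call me at 415-555-1234 tomorrow.")
# match.group() -> "415-555-1234"
```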
October 12, 2018
I've made better progress adding more Chapter 3 ideas to the Visit Monaco website by including Browser Object Model and Document Object Model ideas, as well as solidifying object-oriented scripts in the stats section. The MTA Express or Local app is taking longer than expected, since I can't yet figure out how to interpret the stop_id and train id values to make it feasible. I found two related projects, messaged their devs for advice, and posted on the MTA Google+ group. My Lubuntu D23 patch was finally committed by Wxl earlier this week, making my first contribution official. I also updated the Programming Projects section with more "Automate The Boring Stuff with Python" chapter examples.
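One thing that makes the stop_id values confusing at first: in the MTA's GTFS data, a stop_id like "701N" is the parent station id ("701") plus an "N" or "S" platform/direction suffix. A small helper can split the two apart (a sketch, assuming that suffix convention):

```python
def split_stop_id(stop_id):
    """Split an MTA stop_id like '701N' into (station, direction).

    Ids without an 'N'/'S' suffix are parent stations, so the
    direction comes back as None for those.
    """
    if stop_id and stop_id[-1] in ("N", "S"):
        return stop_id[:-1], stop_id[-1]
    return stop_id, None
```

The station half can then be joined against stops.txt to get a human-readable stop name.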
October 5, 2018
I updated the Visit Monaco website's stats sections with more arrays and object-oriented programming ideas. I have also been making progress on the Express or Local App by displaying information about the first train in the dataset in the console every 5 seconds. I hope to soon demystify the route IDs and stop IDs to make it clear where the current train actually is in relation to an actual NYC subway stop. I also made my first commit to Lubuntu's development, which can be found here:
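Displaying the first train's info every 5 seconds can be a simple fetch-render-sleep loop. This sketch uses stand-in `fetch` and `render` callables rather than the app's real ones:

```python
import time

def poll_first_train(fetch, render, interval=5, cycles=None):
    """Every `interval` seconds, fetch entities and print the first one.

    `cycles=None` loops forever; pass a number to stop after that many
    iterations (handy for testing).
    """
    while cycles is None or cycles > 0:
        entities = fetch()
        if entities:
            print(render(entities[0]))
        if cycles is not None:
            cycles -= 1
            if cycles == 0:
                break
        time.sleep(interval)
```

Keeping fetch and render separate from the loop makes each piece easy to swap or test on its own.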
September 28, 2018
I have made more changes regarding object-oriented JS ideas to the Visit Monaco website, and uploaded my progress on the "Express or Local App". I have been actively trying to help the Lubuntu community with two related bugs using C++ and Qt, as well as Python.
September 21, 2018
I have been making more progress on the Express or Local Python app with the help of IRC members from the #Python channel. I'm getting closer to being able to parse the unbuffered protobuf (.proto) data stream from the MTA, and will work with it to make it usable. I've also been implementing more object-oriented scripts for the "Visit Monaco" website.
September 14, 2018
September 7, 2018
With help from some programmers on the #Python IRC channel, I have been able to run the Hockey Webscraper successfully. The only thing remaining for this project is to add team logos as another dictionary of .jpeg pictures within the same directory. I have also continued to make progress in "Automate the Boring Stuff with Python", and updated the "Visit Monaco" webpage with a few more scripts on the Stats page. I have also been going back and forth on Beeware's Gitter page to get more help running the tests necessary for their open source project. I'm also aiming to start learning Git to help with this website.
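Mapping team names to logo files in the same directory can be as simple as a naming convention; the filename scheme below is my own assumption, not the scraper's actual one:

```python
import os

def logo_path(team, directory="logos"):
    """Turn a team name into the path of its .jpeg logo,
    e.g. 'New York Rangers' -> 'logos/new_york_rangers.jpeg'."""
    filename = team.lower().replace(" ", "_") + ".jpeg"
    return os.path.join(directory, filename)
```

A convention like this avoids maintaining a literal dict by hand as teams are added.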
August 31, 2018
August 24, 2018
I have been updating the Monaco website to include more scripts on the stats page. I am also becoming more active in open source projects such as BeeWare, and am working on running test scripts for that great project. I am also working on finishing the Hockey Webscraper so the Twitter post includes an FTP link hosted on the webhosting provider's FTP service, using Python's "ftplib" module and urllib3. I'm also going to fix the switches on my Line 6 PodHD500 multi-fx pedal this weekend, and will post pics of the completed project.
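The upload-then-link flow with ftplib might look like the sketch below; the host, credentials, and paths are all placeholders, and the real hosting setup may differ:

```python
from ftplib import FTP

def ftp_url(host, remote_name):
    """Build the public ftp:// link that goes into the tweet."""
    return "ftp://{}/{}".format(host, remote_name.lstrip("/"))

def upload_and_link(host, user, password, local_path, remote_name):
    """Upload a file over FTP in binary mode and return its public link."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(local_path, "rb") as fh:
            ftp.storbinary("STOR " + remote_name, fh)
    return ftp_url(host, remote_name)
```

Binary mode (`storbinary`) matters here, since an .xlsx file would be corrupted by a text-mode transfer.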
August 17, 2018
I've been working on the Visit Monaco website, and on updating the Hockey Webscraper to post to Twitter. The only remaining issue is getting Python's FTP module to work with the shared hosting server that this website is on. Once that's sorted, I can finally include Twitter posts with Excel .xlsx file attachments for the Hockey Webscraper. I have also been working on various Automate the Boring Stuff chapter projects, which I will upload at a later time.
August 9, 2018
Uploaded the Garden Menu Webscraper Twitter Bot, which scrapes the menu of a Brooklyn based restaurant and reposts the link on Twitter for customers' ease of use. I also worked on the Monaco website this week.
August 6, 2018
I updated the Cat Of The Day Twitter Webscraper Bot to actually run at 1 PM every day! This took several attempts, since the urllib module is very particular about how its methods are called. I am currently adding more features to the Monaco website, and am also making a menu webscraper Twitter bot that will actively scrape a Brooklyn based restaurant's PDF menu. I'm waiting for Twitter to approve this bot's account, but it should be working soon.
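One common urllib gotcha along these lines: plain `urlopen(url)` calls often fail until you build a Request with explicit headers, because many sites reject urllib's default "Python-urllib" user agent. A sketch (the URL is a placeholder):

```python
from urllib.request import Request, urlopen

def build_request(url):
    """Build a Request with a browser-style User-Agent header,
    since many sites return 403 for urllib's default agent."""
    return Request(url, headers={"User-Agent": "Mozilla/5.0"})

# To actually fetch:
# with urlopen(build_request("https://example.com/menu.pdf")) as resp:
#     data = resp.read()
```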
August 3, 2018
July 27, 2018
I am currently working on the "Visit Monaco, The Smallest Country In The World" website. I am making a lot of progress on the Hockey Webscraper at home, and am working on putting the results in an Excel .xlsx file so I can attach it to an e-mail and send it to the user.
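Attaching the .xlsx to an e-mail can be done with the stdlib's EmailMessage; actually sending it (SMTP server, addresses, credentials) is left out here, so those parts are placeholders:

```python
from email.message import EmailMessage

# MIME type registered for .xlsx workbooks.
XLSX_MIME = ("application",
             "vnd.openxmlformats-officedocument.spreadsheetml.sheet")

def build_stats_email(sender, recipient, xlsx_bytes, filename="stats.xlsx"):
    """Build an e-mail with the hockey stats workbook attached."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Hockey stats"
    msg.set_content("This week's hockey stats are attached.")
    msg.add_attachment(xlsx_bytes, maintype=XLSX_MIME[0],
                       subtype=XLSX_MIME[1], filename=filename)
    return msg

# Send with smtplib.SMTP(...).send_message(msg) once credentials are set up.
```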
July 19, 2018
I finished the fake website, "The History Of MS Paint", and uploaded the results. I am still working on the Hockey Webscraper at home, and am attempting to use Selenium to automate tasks at work as well.
July 13, 2018
I am making progress on the Hockey Webscraper, but haven't posted the results yet. I created another fake website called "The History Of MS Paint" to demonstrate the skills learned so far from Chapters 1 & 2 of the HTML & JS books. I look forward to doing more project work before continuing in the books, so I can reinforce and demonstrate what I've learned.
July 9, 2018
Was incredibly sick for the last two weeks, but thankfully feel much better now. I have started working through the programming books at work to continue my progress, and will focus on the Hockey Webscraper first before revisiting the Song Of The Day Webscraper, since that one will take more time to really find a solution via Repl.it.
June 15, 2018
Still trying to find a capable Python based synth that will work in Repl.it to convert the MIDI sample to WAV for the Song Of The Day project. Made only small progress this week due to sickness.
June 8, 2018
Updated the overall website design; it looks good so far. Still trying to make the Song Of The Day Webscraper actually work on its iFrame webpage. The Cat Of The Day Webscraper still works, and I'm making progress on the Hockey Webscraper. I might shift focus to purely implementing the MIDI to audio conversion for the Song Of The Day Webscraper, though.
June 1, 2018
Updated the Song Of The Day Webscraper with my current version. It's still not working, because I haven't figured out the MIDI to audio conversion process within the same tab. However, it's getting closer, since I'm using FluidSynth to do this, along with the midi2audio Python library, and I e-mailed the library's creator with questions.
May 25, 2018
Updated the background color to light purple on all webpages. Fixed the Python script for the Cat Of The Day Webscraper. Still working on making the Song Of The Day Webscraper functional without MIDI, and currently researching MIDI to WAV converters so the user can play it in the browser. Will also update the guitar section with an actual step by step process for fixing my Fender Jaguar guitar's input jack with soldering, since it's my first time doing so. Wish me luck!
May 18, 2018
Adjusted all web pages to include mobile based viewport meta tag to allow for easier viewing on mobile devices.
May 17, 2018
Adjusted styling on Python programming project webpages, as well as adjusted width of iFrame for scripts.
May 16, 2018
Added my Python programming projects, the Song Of The Day Music Webscraper and the Cat Of The Day Twitter Webscraper, to the Python projects section. Need to fix the errors in the Repl.it version of the Music Webscraper.
May 15, 2018
May 14, 2018
Adjusted styling on Musimatic's various webpages to include bigger headers.
May 11, 2018
Fixed script issues on the Collar website's Spin To Win webpage, and adjusted styling consistently throughout the website. It's complete, and I like how it looks with Bootstrap. I tried adding a dropdown menu nav bar with Bootstrap, but it isn't working out easily. Will try again next week.
May 4, 2018
Adjusted styling of the old fake website, "Slings", to appear more consistent. Also updated "Collar" with a Profiles webpage this week.
April 27, 2018
Updated the website to include consistent styling, and uploaded the actual fake websites with their current state of progress. Looks good so far.