How to fix skewing problem when rotating geometry in Maya

Duplicating meshes is a great way to speed up the modeling process, but sometimes Maya does not handle it well.

The mesh might skew while translating, rotating, or scaling, which can be very time-consuming to fix and can deform the mesh to an unfixable state.

This tends to happen because Maya miscalculates the transforms the mesh inherits from its group.

To fix this, select the mesh that is skewing, then go into the Outliner and move its node to the top level, removing it from any group.

This will fix the issue and the mesh will transform, rotate and scale normally.

Have a question? Leave a comment below.

Rust on Mac

Rust is a systems programming language gaining popularity as a “safe, concurrent, practical language”. Being memory safe while maintaining performance is why so many people are adopting it as their systems language of choice, and it won first place for “most loved programming language” in the Stack Overflow Developer Survey in 2016 and 2017.
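If you just want to try it on a Mac, here is a minimal install sketch (assuming you use either Homebrew or the official rustup installer):

# option 1: install the Homebrew package
brew install rust

# option 2: install the toolchain via rustup (the official installer)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

After either route, rustc --version should print the installed compiler version.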

Continue reading

Digital Sculpting

The first post of my new series of Notes, to help you understand computer graphics better.

Digital sculpting is one of the most artistic ways of manipulating a mesh and the preferred way of creating organic forms, such as characters, environment props, rocks, stones, and trees, to name a few. It has given artists the freedom to interact with a mesh like traditional clay.

Continue reading

Box Bounce Animation (OpenToonz)

While testing the OpenToonz 2D animation software, I hand-drew a box squash-and-stretch animation sequence with my Wacom Bamboo tablet (CTH-661), which helped me test how OpenToonz handled the tablet and responded to pressure sensitivity. It was fun to create, but the software had issues registering pen clicks at times and rendering certain brush strokes.

Playblast in Blender

A final render can be time-consuming and taxing on the hardware, and it is not the right choice when you only need to show mechanics or want a much quicker, near-instant render. An alternative to a final render is the viewport render, also known as a playblast amongst animators.

A viewport render is quick because it renders with OpenGL. It also gives you the option to render elements that are never visible in the final render, like empties (null objects), letting animators display rig mechanics in their showreels and to teammates.

In Blender the viewport render is known as “OpenGL Render”, and it gives you two options: “OpenGL Render Image”, which renders the viewport as a still, and “OpenGL Render Animation”, which renders the animation.

To render a still, go to Render > OpenGL Render Image

You can also go into the viewport header and click on the shortcut instead of navigating all the way to the menu.

The default scene should look something like this.

By default, it renders the complete viewport; it is similar to taking a screenshot.

Controlling the viewport render is quite simple; the settings are in the N-panel. Things like Ambient Occlusion and MatCap can also be assigned and will be part of the viewport render when executed. You can also hide certain elements, like empties, by ticking the “Only Render” checkbox.

A customized viewport render of the default scene looks like this.

Do you need a CDN for a blog

I find a lot of blogs use CDNs these days, and this made me wonder whether they seriously need one or just use one because they can. What felt odd was how many personal blogs, where the content is mostly text, used a CDN. I strongly believed they did so because the website was powered by a heavy CMS or served from a slow server, based on my previous experience of hosting WordPress on a slow shared hosting service (thank you iPage for that experience). Now I use Digital Ocean, which offers VPSes (Virtual Private Servers) called droplets and gives you the option to choose from a wide variety of locations, so you can have a server close to you or your visitors.

It made sense to use a CDN with WordPress because too many client-side files get loaded, and it turns into an overload to infinity and beyond thanks to plugins and people writing legacy code.

What I found instead was that even static sites used CDNs, which was quite a shock, because none of those sites were image heavy; they had quite lightweight client-side scripts and stylesheets.

For such sites, all a CDN would add is downtime, whenever the CDN service itself went down or had an issue with the specific instance handling those clients’ files.

I wanted to research why this was a trend, and the results are quite interesting.

The reason

It is not a surprise that a lot of sites, especially new ones, want to be more competitive and be part of the Alexa master race.

The main goal of these websites is to reach as wide an audience as they can, and they will do anything to get there. The authors, publishers, and owners apply every client-side or server-side optimization without understanding whether it is needed, just because a clickbait article told them a CDN makes a site blazing fast. While that can be true, it does not make the content or the subject great.

In my opinion, a blog succeeds when it has great content that is a good read, and it grows when such reads are served consistently. I think most people learn this only after a bit of experience, and until then they will be misled and given wrong information, as long as clickbait is the trend for getting views.

I can understand a publisher’s mind: they want to be out there, noticed and seen. I have seen some sites, even personal blogs, turn completely clickbait just to gain views and immediate popularity, while others gave up on their blogs just because maintaining one got harder and decided to shut them down when it became difficult. Yes, this is a sad reality; people give up too fast and want everything instantly.

When to use one

You might already be familiar with the concept of a CDN; if so, skip this part, and if not, read on.

A CDN has several servers, or instances, in different locations. It replicates the data from the provider (the client) onto those servers, which then serve the content from the server closest to the visitor, reducing latency and the distance the data travels on the network. I have written a much more [detailed post on this topic]

Example

Without a CDN

The server is in the US, the visitor is in Singapore, and the visitor is served directly from the US server.

With a CDN

The server is in the US, the visitor is in Singapore, and the visitor is served from the closest server, which may be in Singapore or some other part of Asia; either way, it is served by a server inside Asia.

These days CDN services offer servers in many countries, letting visitors be served by a server inside their own country. For example, there can be servers in India in both Mumbai and Delhi, so visitors from Mumbai will be served by the server in Mumbai.

Do you need it?

If you serve a large number of posts or a lot of data, with large files or image-heavy pages, then a CDN is worth it. Otherwise you don’t need one, as long as your server can handle decent traffic and the page load time is not very long.

How to create a sketch app in JavaScript using P5.JS

I have been learning P5.JS for a while now and thought it would be nice to share the process and progress in the form of a tutorial, which will motivate me to continue learning and give others access to it.

We will learn how to create a sketch app. I have kept it as basic as I can, which makes it easy to understand and quick to create.

To follow this tutorial you need to know the basics of HTML and JavaScript.

Let’s start with What is P5.JS?

P5.JS is a JavaScript library based on Processing, which was originally written in Java for creating interactive applications. The reason for creating such a framework was to make coding accessible to artists, educators, and beginners. Over the years there have been many ports of the framework to different languages, and P5 is one of them.

First, we need to get P5.JS (the minified version); we are going with the minified version because it contains the complete library in the smallest footprint possible.

Creating the bare-bone structure of the project

index.html
lib/p5.min.js

We will then create run.js, this file will contain all the code we will be writing.

Our project should look something like this.

index.html
lib/p5.min.js
run.js
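If you prefer the terminal, the same structure can be created with a couple of commands; a small sketch (grab p5.min.js from the official p5.js download page and drop it into lib/):

mkdir -p lib
touch index.html run.js
# place the downloaded p5.min.js inside lib/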

Time to write some HTML in the index.html file.

 

<!DOCTYPE html> 
<html> 
<head>
<meta charset="UTF-8">
<title>Sketch App</title>

<style> body {padding: 0; margin: 0;} canvas {vertical-align: top; border: 2px solid #000;} </style>
</head>
<body>
<script type="text/javascript" src="lib/p5.min.js"></script>
<script type="text/javascript" src="run.js"></script>
</body>
</html>

As you can see above, we have the p5.js library before run.js. Since web browsers read code line by line, it is important that the library is interpreted before run.js; otherwise all the code we write will throw errors, because the browser does not know what we are trying to do.

In P5.JS the naming convention follows what a function does and what it contains, which makes it easy for a beginner to understand.

It’s time to edit the run.js file.

The first function will be setup(). It will contain all the code that initializes the program, more like the rules and instructions to follow.

function setup() {
}

We will create the draw() function below setup(). The draw function will contain all the drawing tasks that will be running inside the canvas element that we will create in the next step.

function draw() {
}

The code should look something like this.

function setup() {
}

function draw() {
}

Create the canvas inside setup() using the createCanvas() function.

The createCanvas() function takes two parameters: width and height.

function setup() {
createCanvas(800, 800); // width, height
}

If you open the web page you will see a white box with a size of 800 x 800 surrounded by a border of 2px.

You can change the background color using the background() function, which takes one parameter.

function setup() {
createCanvas(800, 800);
background(85); // you can choose between 0 and 255.
}

Let’s add some functionality to draw() which will make the magic happen.

function draw() {
if(mouseIsPressed) {
rect(mouseX, mouseY, 2, 2); // X location, Y location, width, height
}
}

As you can see in the above code, we have an if statement that checks the mouseIsPressed variable, which is true while a mouse button is pressed. Using the if statement we tell it to draw a rectangle with the rect() function at the mouse’s X and Y position while maintaining a width and height of 2px.

With the fill() function, we give the rect a white interior.

function draw() {
if(mouseIsPressed) {
fill(255); // choose between 0 and 255
rect(mouseX, mouseY, 5, 5); // X location, Y location, width, height
}
}

We can remove the outline around the rectangle by adding the noStroke() function; this removes the stroke surrounding the rects we draw.

function draw() {
if(mouseIsPressed) {
fill(255); // choose between 0 and 255
noStroke();
rect(mouseX, mouseY, 5, 5); // X location, Y location, width, height
}
}

The project is done. Good job!

The complete code in run.js should look like the snippet below.

function setup() {
createCanvas(800, 800);
background(85);
}

function draw() {
if (mouseIsPressed) {
fill(255);
noStroke();
rect(mouseX, mouseY, 5, 5);
}
}

This is how the final version should look.

You can download the completed source code of this project from GitHub.

Happy Coding and Happy New Year.

Bye 2016

I see a lot of people say 2016 was a bad year. It was, to a certain extent, but I believe it was a good experience, and I feel 2017 will be the year to correct previously made mistakes and to be present in the moment, as the present moment is the greatest present 🎁 right now, one that money can’t buy and is in short supply. So live in the present and be a present 🎁 to someone.

How you spend your hours decides how your day goes, how you spend your days decides how your months go, and your months decide how the year ends for you. — Darryl Dias

Hope you enjoyed the read, Happy new year 😊

Jaxx wallet

The days are gone when you had to manage Blockchain assets manually and in the process learn new systems, wallets and interfaces. Decentral launched their new wallet Jaxx which supports highly valued Blockchain assets (BTC, ZEC, LTC, Doge, Dash, ETH, ETC, DAO, REP) and continues to add more cryptocurrencies based on the demand of the communities.

It also ships with ShapeShift integration, which means you can exchange cryptocurrencies without leaving the wallet (for example, if you are interested in investing in ZCash by exchanging it for Bitcoin).

Security-wise it is quite solid, because it keeps the private key on the device and not on centralized servers.

The best part is that it supports all the popular OS platforms, and recovering your assets is easy because you only require a single recovery passphrase.

Here are some screenshots of the wallet. Don’t send funds to these addresses.

Jaxx Bitcoin

Jaxx ZCash

Jaxx Ethereum

This is my first ever write-up in the cryptocurrency world, and feedback is always welcome. Let me know what you think about this wallet; I would like to hear from you.

My experience with WordPress SQLite

A few months back I wrote a post on how to get WordPress running with SQLite, which received a great response and turned out to be something people had been looking for for quite some time.

This means instead of using a production-grade SQL Database you can use a file that can be queried as a DB, making the footprint smaller.

I would like to thank kjmtsh for making this possible by creating the SQLite Integration plugin.

This website was previously powered by WordPress and SQLite, which was great. It did not require configuring a MySQL database, and it ran very well on my VPS, a $5 Digital Ocean droplet that did a great job serving the site, as it was relatively new and had only 60 to 100 views a day.

It was all going well until I started using plugins and third-party services to get extra functionality, which made the database file bigger and queries slower. Over time the website started receiving more visitors, now 200 to 300 a day, which made it slow; when multiple visits took place, serving them took even longer, and even the admin interface was affected.

The “vacuum” option worked well to reduce the file size, but the main problem was not the file size. It was the unused columns and tables created by plugins that were never deleted or cleared after disabling them and removing them from the system.

WordPress does not remove unused information and does not give you an option to do so by default.

I found that using cache plugins was a temporary fix, and it did fine, but then the admin area became sluggish and things like drafting and editing previously published content were a nightmare. Meanwhile the site kept growing and received an even larger viewership, which made it extremely slow; a request could take 30 seconds to load.

I spent some good time researching this problem, and it turned out that the SQLite setup is great for websites that have a smaller viewership and don’t update very frequently.

This website has over 172 posts, which no longer makes it small, and it continues to grow.

The reason I was using SQLite was that MySQL would be too heavy on a single-core 512 MB VPS, and I did not expect to have a large audience.

Right now this website is back on WordPress, powered by a MySQL database, after a stint on Ghost (which is a nice blogging platform), and I have upgraded to a $10 VPS which offers 1 GB of RAM.

WordPress was never designed for SQLite, but SQLite can be a good alternative for those who don’t prefer, or don’t have the resources for, a MySQL database.

This would be the perfect solution for devices like Raspberry Pi.

It would fall apart if more than one editor worked on the same or different content, as SQLite does not handle concurrent writes well and will be slow as hell.

Hope you enjoyed the read, leave a comment below to let me know what you think or if you are planning to go with SQLite.

Remove all installed Homebrew packages

Keeping your Mac clean can be a difficult task if you use Homebrew to install packages for development and testing purposes; without carefully picking the right packages, it can bloat your system and take up unwanted storage space.

This one command will uninstall all the packages installed through Homebrew.

brew remove --force $(brew list) --ignore-dependencies  
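If you want to preview exactly which packages will be removed before running it, list them first:

brew list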

To remove unused archives run.

brew cleanup  

If you have any question, leave a comment.

Bots attacking my Google Analytics user id

I looked into my Google Analytics dashboard and found it was still tracking visitors even though my website no longer includes the JavaScript tracker.

It turned out to be a bot attacking my Google Analytics user ID, creating fake visits to pages that do not exist on my site, with referrals from sites that are not related to my site or the subject matter I write about.

Multitasking leads to failure

In this age and time, we are surrounded by technology that was once a dream and only part of sci-fi movies, now part of our daily lives. Today our smartphones have more processing power than all the computational power NASA had when man landed on the moon. We have this humongous amount of power at our fingertips, and it has given us some of the greatest advantages and helped humanity to a very large extent.

Looking at it the other way is also interesting, as it has done the same in the opposite direction. Today we have devices that can multitask and do a lot of things: handle messages while taking a high-resolution photo, download documents while watching a video, stream a live event or a complete desktop. This has made us aware of the opinions of near and far relatives, family, and friends that never mattered to us; we see content that does not matter and suffer from information overload.

It has also resulted in a large number of people feeling busy in the present and then feeling that they have not achieved much by the end of the day. This comes from having access to any information one wants to learn about, even when it does not help accomplish the specific task he or she is doing at present.

An average person checks his smartphone for updates every twenty minutes or so and is not mentally present in that moment.

This also makes us less dedicated to our work and eats away our time.

Humans were never meant to multitask; we are only good at doing one thing at a time, and doing that one thing very well.

Every day we decide we should accomplish so much, and we fail. Because of this we drift miles away from our goals and feel the time was not spent well.

We are not always aware that this is happening to us: you might be working on a document in your editor and suddenly remember a video your friend was talking about the other day, and instantly decide to watch that video instead of finishing the document, just because the mind subconsciously finds other tasks easier than the one you are working on.

Later you realise that editing the document would have taken less than thirty minutes, but you ended up spending more than two hours on it, just because you sat there and decided to watch all the videos YouTube recommended because they are only five minutes long, and instead watched a whole playlist when you were supposed to go back and edit the document.

A study has shown that switching between multiple tasks instead of completing one makes you feel mentally rewarded, because your mind wants to work on easier tasks and is fooled into believing that other tasks are easier than the current one, resulting in checking your smartphone over and over again, or opening the empty fridge door to see if the tooth fairy has left some chocolates.

The only way to fix this problem is by scheduling your workflow instead of just scheduling your work. This makes you work on what you should be doing at present instead of worrying about everything that needs to be done. The result is that you do what needs to be done in the moment, and by the end of the day you feel you have accomplished more and completed your tasks, so you can browse your smartphone or any other device more relaxed, because your mind is not pulling you into thoughts of what you should be doing and how much is left to do.

Facebook’s Keyframe

Image source: code.facebook.com

Facebook recently open-sourced its Keyframe library, which powers its reaction emoji, the ones you find to the left of the Like button.

Keyframe turns Adobe After Effects shapes into animation data that can run on iOS and Android devices.

This is more of an overview; if you want further details, visit code.facebook.com.

You can view or contribute to the project by visiting its GitHub.

Source

Keyboard shortcuts every Maya user must know

Maya is one of the most popular animation packages, widely used by studios and hobbyists. It has advanced features and toolsets that other software currently doesn’t offer, making it the most preferred software in a pipeline.

Having many tools to work with can sometimes be painful, especially when you have to go through many menus, resulting in a slower workflow.

Continue reading

New improvements in Blender 2.78

The Blender Foundation recently announced the 2.78 RC release, which has some new features and major improvements over the last release. The release notes are still being refined for the final builds; so far it has been great, so until then let’s have a look at these changes.

Grease Pencil improved

The grease pencil has come a long way and has a lot of improvements in this version; it now offers various kinds of brushes, as seen below.

Continue reading

How to install Blender on Mac

Blender is a free, production-ready, open-source 2D/3D animation software package that is used by studios and individuals to create stunning artwork, animated films, models, packshots, VFX, 3D prints and video games.

It is also one of the few packages that offer GPU rendering (the Cycles renderer). Currently it only supports CUDA GPUs and has initial support for AMD, so render times may vary depending on the graphics card itself, or the software may only support CPU rendering depending on the hardware.

Continue reading

Arnold renderer in Maya 2017

Autodesk recently announced Maya 2017, which has a lot of new features, a large number of improvements, and several bug fixes. One of the biggest changes is the switch to the Arnold renderer, which is now the default renderer.

It is also set to be faster and more efficient than Mental Ray.

Mental Ray can still be installed as a third-party plugin; if it is not installed, files saved in older builds of Maya may cause issues or even crash it.

While some may like the new renderer, for others it is an extra plugin they need to install to get their favorite renderer back.

An article will be published soon that will reveal more details of the changes in Maya, so stay tuned.

Keep rendering awesome stuff,

Thank you for reading.

Add Contact form to your Ghost blog

Ghost does not ship with form functionality, so we have to choose a third-party service, and there is one: Formspree. It lets you add forms to a static site, which is great, since all we need to do is embed an HTML5 form that does not require any coding skill. All you need to do is replace me@example.com in the snippet below with your own email address.

 <form id="contactform" action="//formspree.io/me@example.com" method="POST"> 
 <input type="text" name="name" placeholder="Your name">
 <input type="email" name="_replyto" placeholder="Your email">
 <textarea name="message" placeholder="Your message"></textarea>
 <input type="submit" value="Send">
 </form>

Once you add this form, submit it once and Formspree will ask you to verify the setup by sending a mail to that address; once verified, your form is ready. Thank you for reading; for more posts stay tuned.

Downgrading Ghost

You may have recently noticed the downtime on my website. It was not a server maintenance day; it was me being fed up with the number of bugs and other issues I was facing with Ghost version 0.9.0.

In recent months I switched to Ghost, and the reason was its interface and ease of use, completely designed for content writing. It helped me be more productive because I could focus on writing, and that was all I wanted. I preferred it because my website has a minimal design and is oriented towards readers; for other types of websites a fully blown CMS like WordPress is a much better option.

I was enjoying every moment of writing in Ghost up to 0.6.4. I was aware that it lacked a lot of features that other platforms offered, but the fact that I could preview Markdown side by side and have almost the complete screen occupied by my writing made me stick with it. It was all perfect until Ghost decided to redesign the admin interface in version 0.7.0, which looked nice. I instantly upgraded and started working with it. At first it was fine, until I started feeling that the sidebar occupied a lot of space and, when set to hide, would pop up every time my cursor moved close to the focus area. I could avoid that, so I did not care about it and continued upgrading to the latest releases.

Over time I started discovering that sometimes when I searched for a blog post by its correct name, it would never appear in the search results, and at times it would open a post I had edited earlier that was not related to the search. I expected this to be fixed in a newer release, which never happened; it only got worse.

The admin panel also started loading more slowly, and the login process took longer. Instead of things speeding up or getting more stable, new features were included in beta that were not at all functional, which was most disappointing.

Normally software is supposed to get better and more stable with each release; instead this got slower and started to have more bugs and non-functional beta features.

Beta features are supposed to be functional, not part of the app as a dummy.

I got annoyed and switched back to version 0.6.4, which is faster and feels nicer.

Thank you for reading,

for more posts stay tuned.

When to use a static site?

Static sites are great: they don’t require a database or pre-processors, and they don’t require any complex server-side setup to get working. All you need is a web server that is set to serve a static folder. In the earlier days of the Internet everything was static, because pre-processors and server-side scripts were not a thing; at that time almost every web page had to be hand-coded, which was time-consuming. These days we have pre-processors and server-side scripts which let us create websites and web applications with ease and have reduced the amount of code we have to write, which has also led a lot of developers to adopt DRY (Don’t Repeat Yourself). Static sites still exist and are in use, but the real questions are: Are they overkill? When should you use one, and does how often the content is updated matter? Is it difficult to create one? What are the benefits over other approaches? And what about hosting?

Are static sites overkill?

Yes, if they are used in a setup where a database-driven website is doing fine and does not turn into a resource hog on the web server.

When to use one? And does update frequency matter?

How often will the site be updated? If the site is going to be updated daily or once a week, it makes total sense to go static, especially if you are a single user/admin. If the site is updated hourly or every few minutes, then it can remain static only as long as the updates are client-side. Why? Because after a certain number of pages (150-1000) static site generators and website builds get slow, and they can be much slower if pre-processing of stylesheets and scripts is done. I would highly recommend going for a database-driven CMS if your website has multiple authors or multiple pieces of content being published; in such a scenario a static site build would be more of a hassle and at some point turn unmaintainable, as each author might end up with a different version of the same or different content.

Is it difficult to create one?

No. These days we have static site generators like Jekyll and many other popular ones that are well documented, so a less knowledgeable person can also create a simple and fast website. Most static site generators, Jekyll included, ship with markup, stylesheet, and script pre-processors like Markdown, Sass, and CoffeeScript that give you more control over your output. The result is also lighter, because every page is pre-rendered, which can be great for a high-traffic website that does not update often, especially when running on a low-spec server.
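As an illustration, getting a local Jekyll site running takes only a few commands; a quick sketch, assuming Ruby and RubyGems are already installed:

gem install jekyll bundler
jekyll new my-site
cd my-site
bundle exec jekyll serve
# preview the site at http://127.0.0.1:4000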

Benefits?

It is faster than a database-driven website because it does not require any server-side pre-processing, querying, or fetching of any kind. It can be built on any system supported by the specific static site generator. It can be previewed locally much more easily before being published to production. It is more like a filesystem, so it can be version-controlled easily with the help of Git, Mercurial, or any other DVCS tool.

Hosting?

All you need is the output of the static site; place it in the webroot and even the most minimal web server can serve the website. GitHub offers the GitHub Pages service, which lets you host your websites on a *.github.io domain, which is great if you want an open source website, and it also offers Jekyll integration. This makes it cheaper to host, which is great if you don’t want to spend a lot of money. I would recommend that you do good research on static site generators and builders so you can find the right tool for your website. Thank you for reading; like always, stay tuned for more content.

How to reduce Chrome resource usage

Chrome is one of the most common web browsers, used by a large audience. It is easy to use but at the same time a resource hog; its web standards and bleeding-edge support make Chrome a developer-friendly browser, and also a resource hog. A simple fix is disabling extensions that are not being used; the other solution is to get The Great Suspender, which will suspend tabs that have not been viewed or switched to for a certain amount of time that can be set by the user.

The extension offers some easy-to-set-up customization that can help tune it to your taste. It has reduced my Chrome usage from 4.85 GB to 1.65 GB, which is great because I have many tabs open but focus for a very long time on one tab while watching YouTube videos or reading articles. Thank you for reading.

Markdown everything

After learning the Markdown markup, I am having a hard time writing posts in the TinyMCE editor.

Markdown has changed my whole writing process. Now I only write HTML when I have to write an HTML5 template, and only when I don’t have the Jade compiler available.

Create bootable Mac OS Sierra installer flash drive

Apple recently announced Mac OS Sierra at WWDC; it is currently available as a Developer Preview and will soon be available as a Public Beta.

While most of you may prefer installing it directly by upgrading your current El Capitan setup, others may want to clean install or create a bootable flash drive to install it on multiple Macs or simply have a USB installer.

You need to have a flash drive that is 8GB or above.

All you have to do is open Terminal, enter the command below, and replace the flash drive name in it with the name of your own flash drive.

(Be careful while choosing your flash drive as this process will erase the data on that specific volume)
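If you are unsure of the exact volume name, you can list the mounted volumes first:

ls /Volumes/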

sudo "/Applications/Install 10.12 Developer Preview.app/Contents/Resources/createinstallmedia" --volume "/Volumes/<flash drive name>" --applicationpath "/Applications/Install 10.12 Developer Preview.app"  

You should see something like this in your Terminal.

Ready to start.  
To continue we need to erase the disk at /Volumes/Flash Drive.  
If you wish to continue type (Y) then press return: Y  
Erasing Disk: 0%… 10%… 20%… 30%…100%…  
Copying installer files to disk…  
Copy complete.  
Making disk bootable…  
Copying boot files…  
Copy complete.  
Done.  

You can now reboot your Mac and hold the Option key to boot from the flash drive that contains the installer.

Thank you for reading.

Difference between .ma and .mb file type in Maya

In Maya there are two file formats you can choose from when saving a file, .ma and .mb. These file types are very different from each other, store content very differently, and have different file sizes. But there is a bigger difference than the ones listed above, and you might have questions you want answered: What is the difference? Which is the better file format?

Continue reading

Why I switched to Muut for comments

Muut, similar to Disqus, is a discussion service, but unlike Disqus, which only allows commenting, it lets you embed a full discussion in your website, so you can have a complete discussion section on your site.

The other great feature of Muut is that it supports Markdown, which means that just as I can write posts in Markdown, you can write comments in it too.

I switched back to Disqus because Muut’s free plan is very limited.

Python3 on Mac

Mac does not ship with Python 3 by default, which means you have to install it manually.

There are many ways of installing it. The first is to download the pre-built package from the official website and install it on your system.

The second way is by compiling it on your system.

While both of the ways listed above are correct ways to install Python 3, updating to the latest version might take a bit of work, because those methods don’t update themselves or with the help of any command line tools.

The third way is to install it with the help of Homebrew

Homebrew is a package manager for Mac that lets you install any package available in its repository with fewer than 4 commands, depending on the install instructions.

If you already have Homebrew installed on your system you are good to go; if not, you can install it from here.

Python3 can be installed by entering this simple command in the terminal.

brew install python3  

Linking Python3 apps and utilities by entering this command

brew linkapps python3  

You can update to the latest version of Python 3 by entering this command

brew upgrade python3  
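To confirm everything is wired up, check the versions on your PATH:

python3 --version
pip3 --version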

Now you are good to go with your Python3 based development environment on your Mac

Happy Coding!

Ruby on Mac

If all you want is to install the latest version of Ruby and you don’t require different versions installed side by side, you can use Homebrew to install it on your Mac.

All you have to do is enter this command

brew install ruby  
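You can verify the install and pull in your first gem (Bundler here is just an example):

ruby -v
gem install bundler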

Your Ruby development is now only a few gems away.

Thank you for reading

The blogging pattern

You may have noticed that recently not much content was being published on my website, and there were no updates about when my posts would get published.

I was working on a new project and was a bit busy learning Unreal Engine, which is a very nice game engine, but a very new thing to me, as I had no experience with a 3D game engine.

At some points I was busy with things and had very little time to test what was going to be part of my posts, so I preferred not to publish them; unless I have an assurance that the guide or post will work without any errors or flaws a few times, I don’t let the draft go live, so the readers don’t face any problems and always have an easy and clean setup with few or no errors.

I post a new article every Tuesday. Why? Because I get most of my visitors on Tuesday, and it is nice for them to have a newly published post to read; visitors who have been coming to my site also know my publish day is Tuesday, and it has been the most preferred day due to my earlier schedule.

How to render wireframe in Maya

 

Rendering wireframes can be a great way to showcase your model or modeling skills, especially when you have to display renders of your work in a showreel or your portfolio.

Maya ships with the Maya Vector renderer, which lets you render vectors and also offers an option to render the wireframe, which means you can render the wireframe of your model without having to use a third-party plugin or external renderer.

How to do it.

Go to the render settings window, by clicking the last icon.

Switch to Maya Vector from the renderer drop down menu.

Now the render settings window should look like this.

Go to the Edge Options.

Enable Include edges.

Now click on render, and you will get something similar to the image below.

The final output should look like this.

Thank you for reading.

VBO in Blender

Working with heavy topology and large projects with too much geometry can slow down your viewport, which at times can be quite irritating, especially when you have to fix minor issues. This can be avoided by enabling VBO (Vertex Buffer Object), a nice feature in Blender. To keep things simple, a VBO is a mechanism in OpenGL that uploads vertex data (position, normal vector, color, etc.) to the graphics card instead of keeping it in system memory. This can have a big impact on performance by reducing lag, and it lets your hardware handle more geometry, because a dedicated piece of hardware optimized for such tasks does the work; modern graphics cards are designed for handling intense loads (rendering, baking, etc.). However, it is not recommended to enable VBOs if you are using an old graphics card with outdated drivers or one that is no longer supported by the vendor.

In Blender release 2.77 this feature is enabled by default and cannot be disabled (the option has been removed).

Here is a video to show how this helps.

Enabling VBO

Go to File > User Preferences

Head over to the System tab.

Blender User Preference

Tick the VBOs box, this will enable VBO.

VBO option zoom

VBO is now enabled and you should see an improvement in performance. Thank you for reading. If you have any questions feel free to leave a comment below.

Blender 2.77 drops Windows XP support

After supporting Windows XP for a really long time, the Blender Foundation decided to drop support for it in release 2.77. The reason for this decision was the switch to Python 3.5, which, per PEP 11, does not support little-used platforms (including Windows XP).

Microsoft had dropped support for Windows XP before the release of Python 3.5, which ended with Python 3.5 not supporting XP. While it is possible to backport Python 3.5 to XP, it does not seem like Blender will offer support even if backports are developed.

This means the Blender 2.77 32-bit release will require Vista or a later release of Windows. While this might be bad news for XP users, you can still continue using the 2.76 release, which is the last version of Blender to support XP.

You can find the official details in this mailing list. Thank you for reading. If you have any questions leave a comment below.

Difference between 2D and 3D Animation

Animation is divided into two major types, 2D and 3D; these types can also be called styles.

Appearance

You can tell the difference between a 2D and a 3D animation just by its appearance; here is an image of the same character in 2D (left) and 3D (right).

2D/ 3D Animation reference

The image is by a third party; I don’t hold any rights to it.

How it’s made

2D

In 2D animation the animator needs strong drawing skills, because 2D animation requires you to draw every frame, which makes it rely heavily on drawing. These drawings have minor changes which, when arranged in sequence, create a motion picture due to persistence of vision. You only need to draw what is seen by the viewer (for example, you don’t need to draw the eyeball if the eyes are closed). These days 2D animation is created in software (Flash, After Effects, Toon Boom Harmony) with the help of digital tablets (pen and tablet). Some software no longer requires you to draw every frame: you only need to draw the keyframes and the software creates the in-betweens by itself, and the animation is tweaked with the help of a graph editor.

3D

In 3D animation the animator works in a 3D workspace known as the viewport, where he controls the environment, components, and objects along three axes, X, Y, and Z; the orientation of these axes varies depending on the software’s preferences and default settings. You can move your animated subject using the controls provided after rigging, but things that are not in view (captured by the camera) still exist because of the 3D nature of the scene (for example, the eyes are still present but not directly visible due to the overlapping mesh structure). The animation is tweaked with the help of the graph editor, which is available in every 3D animation package.

If you plan to learn animation as a course and choose to make it your life, I would suggest you learn both; learning both will help you understand the two in relation to each other. If you are planning to learn it as a hobby you can do the same, or try out 2D and 3D and choose whichever you are comfortable with. Stay tuned to read more animation-related posts; you can subscribe via email to receive new articles every week. Thank you for reading.

Live Tile on Ghost

Microsoft Edge and Internet Explorer 11 support live tiles in Windows 8/8.1 and 10. This means any user or visitor who has pinned your website or web application to their Start screen can receive notifications of updates, such as newly published posts and pages. It is driven by your RSS feed, and the polling or updating of the tile happens every 30 minutes. You can get this working with your Ghost blog by simply adding a meta tag to your theme’s head tag, which in most themes is located in default.hbs.
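The exact tag from the original post is not shown here; a commonly used sketch points Microsoft’s tile notification service at your RSS feed (treat the polling URL and id as assumptions and substitute your own feed address):

<meta name="msapplication-notification" content="frequency=30; polling-uri=https://notifications.buildmypinnedsite.com/?feed=https://example.com/rss/&amp;id=1">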

After adding this meta tag, restart Ghost. If everything goes right, you should be able to pin your Ghost blog to your Start screen and see it display your latest posts and update when a new post is published. Thank you for reading.

Animated movies 2016

This year is going to have a lot of animated movies; here is a list of the ones coming this year.

Kung Fu Panda 3

January 29th, 2016

Finding Dory

June 17th, 2016

The Angry Birds Movie

May 20th, 2016

Zootopia

March 4th, 2016

Norm of the North

January 15th, 2016

Thank you for reading. Leave a comment below if you have any thoughts on these movies or if any movie has not been listed.

GPU Compute with AMD for Cycles Render in Blender

Blender, a free open source 3D animation package used for all kinds of 3D creation, ships with its Cycles renderer, a really powerful rendering engine that supports both CPU and GPU rendering. Until Blender 2.75a was released, it only supported Nvidia graphics cards with CUDA for GPU Compute. 2.75a introduced the most awaited feature, AMD graphics card support, which means you no longer need to set OpenCL test flags; you can just select your AMD graphics card from the Blender User Preferences panel, set the project to GPU Compute, and render with your graphics card instead of the CPU on an AMD-powered system. This gives users more options over hardware and graphics card choices, so people can build better systems for rendering with a wider choice of hardware. Here is the list of AMD GPUs supported by Cycles.

Enabling GPU Compute in Blender

You need to have the latest graphics card driver installed for the card in your system, and Blender 2.75a or higher. You can download the latest copy of Blender from here. Go to File > User Preferences.

blender-user-preferences-tabs

Go to System tab

Blender File User Preference

If your graphics card is supported you should find an OpenCL tab next to None (set as CPU).

blender-user-preferences-render-default-cpu-option

Choose your graphics card. You will get a list of graphics cards; if you have more than one it will list those too, and you can try them in different combinations to see what works best for you.

blender-user-preferences-devices-option

Now save your user settings (the button is in the bottom left of the window). You will get a GPU Compute option in the render panel once you set OpenCL to use your graphics card. The drop-down should look something like this.

blender-render-device-options

You can now render your project with your AMD graphics card in Blender. Thank you for rendering (reading); if you have any questions leave a comment below.

AMD support in Blender

Cycles in Blender has supported GPU Compute for a very long time. The GPU Compute option is like a switch: it lets you render with your dedicated graphics card, if your system has one. GPU rendering can be faster than CPU rendering, depending on your system configuration, which means it takes less time to render and the rendering task is done by separate hardware optimised for it; this can also help reduce viewport lag, because the CPU is not in use and lets you make minor changes faster in real time.

GPU Compute has been my long-time favorite and was always the default in my projects when setting up rendering, back when it only supported Nvidia graphics cards with CUDA, until I switched to a MacBook Pro 15 2015 (with dedicated graphics), which came with an AMD Radeon R9 M370X. To enable GPU Compute for the AMD graphics card on the Mac you needed to use OpenCL test flags, until release 2.75.

You can learn how to enable GPU Compute in my earlier post. Blender 2.76a came with support for AMD graphics cards using OpenCL, which meant I could finally use GPU Compute and render my projects with the graphics card. It was not very stable and had issues like failing to load the rendering kernel and other strange bugs, until Blender 2.76b, which improved stability by a large margin and now renders most of my scenes without crashing. It still needs a lot of improvement before it can be used in production on a large scale, but it is nice to see support for AMD graphics cards; it gives users a wider choice of graphics card options and support.

I still use my Alienware M17 for rendering, which comes with Nvidia Graphics card (Nvidia GTX 580M).

You could also go with a preview or viewport render if you want a quick result or if your GPU is not very fast.

Thank you for rendering, If you have any question feel free to leave a comment below.

Gogs on CentOS

Gogs, also known as Go Git Server, is an open source, cross-platform, self-hosted Git server written in Golang, similar to GitLab, which is written in Ruby. It is easy to install and configure and offers a lot of customization options during setup, letting you choose between MySQL, SQLite, and PostgreSQL as the database backend. It is also one of the lightest and fastest self-hosted Git server solutions; it does not offer as many features as GitLab, but whatever it does offer, it does without pain. If you don’t already have a CentOS server, you can get a VPS on Digital Ocean; by signing up with this referral you get $10 credit and I get $25 credit. Installing Gogs is easy, as it’s available as a pre-compiled package from Packager, and it receives updates like every other package you would install on your system, so you can do the setup once and receive updates with new builds. This pre-built package might not support SQLite, so for this guide I am using MariaDB. You can use PostgreSQL instead of MariaDB if you prefer it.

Getting started.

Adding Gogs repository key.

sudo rpm --import https://rpm.packager.io/key  

Add Gogs packager.io repository to your local packages database.

echo "[gogs]  
name=Repository for pkgr/gogs application.  
baseurl=https://rpm.packager.io/gh/pkgr/gogs/centos7/pkgr  
enabled=1" | sudo tee /etc/yum.repos.d/gogs.repo  

Installing Gogs

sudo yum install gogs

Installing MySQL

sudo yum install mysql-server -y  

Setting up MySQL

sudo mysql_secure_installation  

Logging into MySQL console.

mysql -u root -p  

Creating a new Database for Gogs.

CREATE DATABASE gogs;  
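Optionally, you can create a dedicated database user for Gogs instead of connecting as root; a sketch, with a placeholder password you should change:

CREATE USER 'gogs'@'localhost' IDENTIFIED BY 'your-password';
GRANT ALL PRIVILEGES ON gogs.* TO 'gogs'@'localhost';
FLUSH PRIVILEGES;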

Exiting MySQL console.

exit  

Installing NGINX to reverse proxy Gogs to port 80.

sudo rpm -Uhv http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm  


sudo yum install -y nginx  

Editing the NGINX config /etc/nginx/conf.d/default.conf

sudo nano /etc/nginx/conf.d/default.conf  

Add the following lines to your NGINX config.

server {
  listen          80;
  server_name     ${HOSTNAME};
  location / {
    proxy_pass      http://localhost:3000;
  }
}

Restart NGINX.

sudo service nginx restart  

You can access the newly set up Gogs server at http://127.0.0.1/, configure it further to your preference, and create your new user; you can now store your projects on your private Git server. Have a question? Leave a comment below.

Gogs on Debian

Gogs, also known as Go Git Server, is an open source, cross-platform, self-hosted Git server written in Golang, similar to GitLab, which is written in Ruby. It is easy to install and configure and offers a lot of customization options during setup, letting you choose between MySQL, SQLite, and PostgreSQL as the database backend. It is also one of the lightest and fastest self-hosted Git server solutions; it does not offer as many features as GitLab, but whatever it does offer, it does without pain. If you don’t already have a Debian server, you can get a VPS on Digital Ocean; by signing up with this referral you get $10 credit and I get $25 credit. Installing Gogs is easy, as it’s available as a pre-compiled package from Packager, and it receives updates like every other package you would install on your system, so you can do the setup once and receive updates with new builds. This pre-built package might not support SQLite, so for this guide I am using MariaDB. You can use PostgreSQL instead of MariaDB if you prefer it.

Getting started.

Adding Gogs repository key.

wget -qO - https://deb.packager.io/key | sudo apt-key add -  

Add the Gogs packager.io repository to your sources.list.d directory. For Jessie:

sudo echo "deb https://deb.packager.io/gh/pkgr/gogs jessie pkgr" | sudo tee /etc/apt/sources.list.d/gogs.list  

For Wheezy

sudo echo "deb https://deb.packager.io/gh/pkgr/gogs wheezy pkgr" | sudo tee /etc/apt/sources.list.d/gogs.list  

Updating local package database to fetch meta of the new repository.

sudo apt-get update  

Installing Gogs

sudo apt-get install gogs  

Add the MariaDB repository, choosing the setup depending on the version of Debian you are using and the closest mirror you prefer, from here, then update the local package database.

sudo apt-get update  
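The original steps jump straight into the MariaDB console; if the server is not installed yet, install it first (a sketch, using the package name from the MariaDB repositories):

sudo apt-get install -y mariadb-server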

Logging into MariaDB console.

mysql -u root -p  

Creating a new Database for Gogs.

CREATE DATABASE gogs;  

Exiting MySQL console.

exit  

Installing NGINX to reverse proxy Gogs to port 80.

sudo apt-get install nginx-full  

Editing the NGINX config /etc/nginx/sites-enabled/default

sudo nano /etc/nginx/sites-enabled/default  

Add the following lines to your NGINX config.

server {  
  listen          80;
  server_name     yourhostname;

  location / {
    proxy_pass      http://127.0.0.1:3000;
  }
}

Restart NGINX.

sudo service nginx restart  

You can access the newly set up Gogs server at http://127.0.0.1/, configure it further to your preference, and create your new user; you can now store your projects on your private Git server. Have a question? Leave a comment below.

WordPress app for Mac/Windows/Linux

WordPress recently announced its clients for Mac, Windows, and Linux; the app gives you access to the WordPress dashboard as a native client on whichever supported platform you are using.

So now you can access all your self-hosted WordPress sites (Jetpack needs to be enabled) or your hosted WordPress.com blogs with the beautifully designed WordPress dashboard.

You can get a copy of the client for Mac, Windows or Linux by simply visiting here, and downloading the client for your system.

Thank you for reading,

If you have any questions feel free to leave a comment below.

PHP 7 on Ubuntu

It has been quite some time since PHP 7 was released; it brings major bug fixes, improved and new syntax, and many other changes you can find here.

So I decided to write a guide on how to install, or upgrade to, PHP 7.x.x and set it up on your Ubuntu-based system.

This is a major update and it can break your website or web application; please test your website or web application in a development environment before applying this to your production environment.

Installing

Adding PHP 7 ppa to the local database.

sudo add-apt-repository ppa:ondrej/php-7.0  

Updating the local database.

sudo apt-get update  

Installing PHP 7.

sudo apt-get install php7.0 php7.0-cli php7.0-common php7.0-curl php7.0-fpm php7.0-gd php7.0-json php7.0-mcrypt php7.0-mysql php7.0-opcache php7.0-sqlite3  

Now you can run php -v in the terminal to check whether it has been installed successfully.

Upgrading from PHP 5 to PHP 7.

This will purge PHP 5, remove unwanted dependencies, clean up the temporary files, and then install PHP 7.

sudo apt-get purge php5* && sudo apt-get --purge autoremove && apt-get clean && sudo apt-get install php7.0 php7.0-cli php7.0-common php7.0-curl php7.0-fpm php7.0-gd php7.0-json php7.0-mcrypt php7.0-mysql php7.0-opcache php7.0-sqlite3  

Switching NGINX to PHP 7 from PHP 5

Edit the NGINX config to work with PHP 7 FPM: go to the specific server config and change the following lines.

From this

location ~ .php$ {  
            try_files $uri =404;
            fastcgi_split_path_info ^(.+.php)(/.+)$;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }

To this

location ~ .php$ {  
        try_files $uri =404;
        fastcgi_split_path_info ^(.+.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

The only line changed is the fastcgi_pass.

From this

fastcgi_pass unix:/var/run/php5-fpm.sock;  

To this

fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;  
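After changing the socket path, reload NGINX so it picks up the new upstream (service name as on a standard Ubuntu install):

sudo service nginx reload  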

Note that all the PHP config files have moved from /etc/php5/ to /etc/php/7.x.

So the php.ini is now located at /etc/php/7.x/fpm/php.ini, which means any edits made to the older PHP 5 php.ini need to be re-applied, because this is a fresh new ini file.

You can restart PHP 7 FPM by entering this command.

sudo service php7.0-fpm restart  

Thank you for reading,

If you have any question leave a comment below.

When and why you should use a CDN

A CDN, also known as a Content Delivery Network, is a way to distribute web content and assets to users depending on their geographical location, meaning the server closest to your location serves you the content. This makes load times faster and reduces the number of active connections on the main server thanks to faster delivery. Here are some CDN services available: Cloudflare, MaxCDN, Microsoft Azure.

How CDN works

Why use CDN?

A CDN delivers content to the user faster; this reduces the load time of individual files and assets, and is achieved by distributing multiple copies of the data to servers in various locations.

Here is a simple example:

If you live in Asia and the website you are visiting has a server located in Europe, the amount of time it takes to load a page will be longer because of the distance between the two locations. This is not noticeable when serving small files, but if you are going to download a file as big as 1 GB or larger, you may notice a delay before the downloader connects to the server to request the file, and there will be network delay while fetching the content. This is still fine if the file you have requested is not being requested by other users, but if a large number of people are downloading the same file from the same server, it might take even longer, because an individual server has its own limit on how many requests it can handle.

So now let's see how a CDN can help out.

A CDN copies the same files to multiple servers located in different regions like Asia, Europe and America, and serves the file from the server closest to the user. So if you live in Asia, the server located in Asia serves you the file by detecting your location when it accepts your request. This reduces the delay because the network distance is shorter and there are multiple servers to serve content based on location; if a visitor from Europe downloads the same file, it is retrieved from the server located in Europe instead. This lets websites like Google, YouTube and many others serve content to a larger audience from various locations with a shorter delay.
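
As a rough way to see the difference yourself, you can measure the time to first byte against an origin server and a CDN endpoint with curl (the hostnames below are hypothetical placeholders):

# Prints the time until the first byte arrives, a rough proxy for network distance
curl -o /dev/null -s -w "origin: %{time_starttransfer}s\n" https://origin.example.com/file.bin
curl -o /dev/null -s -w "cdn:    %{time_starttransfer}s\n" https://cdn.example.com/file.bin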

But, when do I need a CDN?

When your website is large and has to serve content to a large audience from various locations in a short amount of time. If you have a blog that gets a good number of views and has graphics-rich content, and the server can handle it without pain and there is no noticeable delay in content served to the user, you really don’t need a CDN. Only if you have a lot of traffic, which might delay the web content served to a large number of visitors, or a sudden rush of traffic to every new blog post that the server cannot handle instantly, is when you might need a CDN.

Everything at once

CDNs are good for serving content faster to a visitor depending on their location. This helps websites that have a large number of visitors and would otherwise see long delays, which might make them lose visitors and can crash their servers. It does not make sense to have a CDN for a website whose traffic can be handled by the server without any pain and that does not have large amounts of data to serve with every request.

Thank you for reading,

If you have any questions feel free to leave a comment below.

The image used in the post is by Wikipedia

Good Bootstrap practices

Bootstrap is a CSS (Cascading Style Sheets) library developed by Twitter that lets you create responsive websites with an easy to understand naming convention and a well-designed grid system.

It has gained popularity due to its ease of use and pre-processor support; here are some good practices that will make your Bootstrap powered environment clean and easy to manage and maintain.

Use PreProcessors

Bootstrap offers Less based builds; these builds let you choose which modules you require and need to be enabled, and help you override the default styles while writing fewer lines of stylesheet.

Things like text colors and the background color of element classes like the navbar can be changed simply by changing the values stored in the variables.

Using preprocessors can make your Bootstrap build lighter and your code base cleaner, not bloated by the defaults.
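
For example, here is a minimal sketch of a custom Less build, assuming the Bootstrap 3 Less sources live in ./bootstrap/less and that the lessc compiler (from the less npm package) is installed:

# Create a small entry file that imports Bootstrap and overrides a variable
cat > custom.less <<'EOF'
@import "bootstrap/less/bootstrap.less";
// Change the navbar background by overriding the default variable
@navbar-default-bg: #2c3e50;
EOF

# Compile it into one stylesheet
lessc custom.less > bootstrap.custom.css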

Use scripts depending on the structure and modules

What I mean to say is: if your website does not use a large amount of Bootstrap, don’t use the complete bootstrap.min.js; instead use the individual module files in development and later minify them into one file for production. So if you have a Bootstrap based drop-down menu, you don’t have to load the complete Bootstrap JavaScript library, only dropdown.js with its helper files.

This will make your website lighter and less bloated with things that are never going to be used, and the client has a smaller file to download and run.
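
As a sketch, assuming the Bootstrap 3 source layout and the uglify-js npm package, combining only the modules you actually use could look like this:

# Concatenate and minify only the dropdown module (add any helper files it needs)
uglifyjs js/dropdown.js -c -m -o dist/bootstrap.custom.min.js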

These are all the major steps I take while developing a website with Bootstrap.

Thank you for reading,

If you have any question feel free to leave a comment below.

Typora Markdown editor

Typora is a free cross-platform Markdown editor for Mac/Windows/Linux, currently in beta and under active development. You might be thinking: not another Markdown editor with keyboard shortcuts, a side panel preview and custom CSS options. Typora is a WYSIWYG (What You See Is What You Get) editor, which means there is no separate preview panel to the side; your source editor is your preview, updating as you type. This gives you more screen space for writing, and your focus stays on a single window rather than a separate panel, as shown in the video below.

It is one of the best Markdown editors I have ever used till date.

It has many other features, similar to other Markdown editors like Mou:

  • Custom theme
  • Font customization
  • Window style (how the app should appear)
  • Word count

It is a great editor for a writer who wants to edit and preview in the same place and wants more of the screen used for the editing area. Thank you for reading, if you have any questions feel free to leave a comment below.

WhatsApp on Mac

WhatsApp recently made their web service available, letting users connect their smartphone by scanning a QR Code with the smartphone camera and then access WhatsApp through a Web UI.

This service became very popular; the problem was that you needed to keep a browser tab, or sometimes a separate window, open for the Web Interface.

While browsing GitHub for interesting projects, I discovered ChitChat, an open source app available for Mac.

I tried it out, found it interesting, and it turned out to be the right solution for this problem.

You can install it by downloading it from here

This application lets you have WhatsApp Web as a Mac app and integrates well with the Notification Center and lets you access some options from the Apple menu bar.

The app is under active development, so there might be a few bugs, and certain features might not be completely functional yet, like the media upload option.

Thank you for reading. If you have any questions leave a comment below.

Linux Kernel 4.3 on Ubuntu

Linux Kernel 4.3 ships with 20.6 million lines of code and support for Intel's Skylake line of CPUs, so Skylake based hardware can run efficiently with good stability.

The performance improvement is noticeable on some hardware running this Kernel build on Ubuntu based Linux distributions (I have not tested this personally; these are according to user reports).

This is a stable release; Linux Kernel 4.4 will be the LTS (Long Term Support) build, currently at RC-2.

The steps below can be used on Ubuntu and Ubuntu-based Linux distributions to upgrade the Kernel.

Installing

It’s really easy to install it on Ubuntu or Ubuntu based Linux distributions. But first, you need to know which build you’re using: 64 Bit or 32 Bit.

You can check this with the following command (the output x86_64 means 64 Bit, while i686/i386 means 32 Bit).

uname -m  

Download these packages (The install process is the same).

64 Bit.

wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.3-wily/linux-headers-4.3.0-040300-generic_4.3.0-040300.201511020949_amd64.deb  


wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.3-wily/linux-headers-4.3.0-040300_4.3.0-040300.201511020949_all.deb  


wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.3-wily/linux-image-4.3.0-040300-generic_4.3.0-040300.201511020949_amd64.deb  

32 Bit.

wget kernel.ubuntu.com/~kernel-ppa/mainline/v4.3-unstable/linux-headers-4.3.0-040300_4.3.0-040300.201511012034_all.deb  


wget kernel.ubuntu.com/~kernel-ppa/mainline/v4.3-unstable/linux-headers-4.3.0-040300-generic_4.3.0-040300.201511012034_i386.deb  


wget kernel.ubuntu.com/~kernel-ppa/mainline/v4.3-unstable/linux-image-4.3.0-040300-generic_4.3.0-040300.201511012034_i386.deb

Installing the packages that we just downloaded.

sudo dpkg -i linux-*.deb  

Updating the bootloader.

sudo update-grub  

Now you can reboot the system to switch to the new Kernel you have just installed.
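
After the reboot, you can confirm the system is running the new Kernel:

uname -r  

If the install succeeded, it should now report the 4.3.0-040300-generic build.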

If you have issues with this Kernel build, you can uninstall it with a few steps.

sudo apt-get purge linux-headers-4.3* linux-image-4.3*

Removing dependencies that are no longer required by the system and cleaning up.

sudo apt-get autoremove && sudo apt-get clean  

Thank you for reading, if you have any questions leave a comment below.

Front-end editor for WordPress

WordPress is a great CMS (Content Management System) for managing blogs and websites of various scales; over time it has gained popularity by fulfilling the demands of its users, picking up features like autosave along the way.

The only thing it lacks at this point is a front-end editor, for users who prefer working on the front end instead of the back end. A front-end editor has many advantages:

  • Preview of how the page would actually look
  • Structure markup depending on the design
  • Manage media (images, videos, etc.) according to the design of the page
  • Easier to handle for the user

Is it possible to get all of this right now? Yes it is, there is a plugin for that too… It's WA-Frontend, a plugin currently in active development that lets you edit pages and posts from the front end. It also allows you to manage settings, and you can access autosaves on the front end with version control, letting you preview or revert to earlier edits. All you have to do is install the plugin and you are ready to go to the front end to see it in action.

Here are a few screenshots:

Thank you for reading, for more WordPress content stay tuned.

Steam on Debian

Steam is a simple and easy to use digital distribution platform by Valve, most popular for game distribution; it supports Windows, OS X and Linux. To install Steam on Debian, all we need to do is get the Steam Debian package and install it. Download the Steam .deb or use this command.

wget https://steamcdn-a.akamaihd.net/client/installer/steam.deb  

You can use GDebi to install this package or use this command

sudo dpkg -i steam.deb  
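
On a 64 Bit Debian install, Steam also needs 32 Bit libraries; if dpkg complains about missing i386 dependencies, enabling multiarch first usually helps (a hedged sketch, not a required step on every setup):

sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get -f install   # pulls in any dependencies dpkg reported as missing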

Happy Gaming, if you have any questions leave a comment below.

Arrange files and folder by name in Mac

Mac by default does not sort files by name, which might make it difficult to find files and folders if you come from other OS environments like Windows or Linux (depending on the file manager's default sorting).

It is very easy to sort files and folders by name and set it as a default sorting type.

Open Finder, then press COMMAND+J; this will open the view options window.

The window offers a “Sort By:” option with a dropdown menu that contains “Name” and various other sorting types.

To set this as the default sorting order, press the “Use as Defaults” button. This turns your local folder setting into a global setting applied across Finder, which means all files and folders will now be sorted by “Name”.

Thank you for reading, if you have any question leave a comment below.

Custom shelf in Maya

Maya by default comes with shelves that contain various tools for different divisions of work, like animating and modeling.

If you prefer to have your own shelf with tools that are not already part of any of the existing shelves, you can add them with a simple keyboard shortcut or by using MEL scripting.

Adding custom shelves lets you put the tools you prefer in a place where you can access them faster; it saves time and makes you more productive by reducing the number of times you have to open a menu and navigate to the tool.

I personally recommend using custom shelves as it lets you have your own personal and preferred toolset, that does not disturb existing Maya workflow and interface setup.

You need to click on the gear icon to the left of the shelves panel.

Click on the Shelf Editor, this will give you the shelf editor window that lets you manage the shelves.

You can rename the shelf to whatever you like.

After typing the name you like, all you have to do is hit Enter.

Adding new tools to the newly created shelf is really easy: go to the menu item you want, press CTRL+SHIFT and click on the tool you want to add. Your cursor will show a small plus icon, letting you know that the tool can be added to the shelf.

It should look something like this.

The cursor does not appear in the screenshot because the native OS X screen capturing module does not capture the cursor.

Once you have followed these steps you should see the newly added tool in your newly created shelf.

Thank you for reading, if you have any questions leave a comment below.

Tricks to get good at things

I spend most of my time trying to learn new things and improving on things I already know; it might be a skill, a technique, a theory or a practical task.

It's nice to know things, and knowing a lot of things is also good, but having the right knowledge, which can be applied correctly and used efficiently, is very important.

If you know something but don't know how to apply it correctly, it just means you don't know enough of it yet. That is not the end of the world; the belief that you can't get better is what ends your learning and lowers your self-esteem.

Sometimes you might know something, yet struggle when trying it practically; thinking about it theoretically is very easy, but doing it in practice might not be.

The answer to this problem is simple: repeat it over and over until it becomes something you know you can do. Doing it repeatedly will automatically fuel your self-confidence, because now you are doing something you already know and are already good at.

Let me explain how this worked for me.

I had difficulty reading and would get stuck on every second word of a sentence. People hearing me would get fed up and stop listening, because my sentences did not have a smooth flow; I was so bad at reading that even a person with a stammer could read faster and more smoothly than I did.

I knew it needed to be fixed, but I was not sure how.

Then one day I realised that I was interested in cars, and the only way to know more was by reading magazines that carried the latest information about cars and the automotive industry. So I finally got my first magazine, the AutoCar India November 2005 edition. I read it to myself and liked it so much that I read every issue after that; I built up information about a lot of vehicles and how they were built, and I could remember 60 pages of price lists of cars and automotive parts.

I had now read over 24 magazines in the span of two years, and I used that knowledge to suggest to people which car they should buy and what would fit their needs.

I never realized the improvement in my reading skills until my mother told me one day that I no longer struggle while reading. I started laughing when I heard that and finally felt that I had done it.

I realized that the will to learn something I wanted to learn about helped me improve my reading. The magazines were a map to the woods, but they also taught me how to get into the woods.

The simple trick to getting good at things is to simply start and be persistent: keep going over it again and again, and at some point it becomes something you have done so many times that doing it once more feels no different.

This also builds the self-confidence that will fuel your journey of learning something new.

Keep learning, never give up, believe in yourself and thank you for reading.

Synaptic on Ubuntu

Synaptic is a package manager with the features listed below; it lets you manage your system through a safe and easy to use graphical front-end.

Features

  • Install, remove, upgrade and downgrade single and multiple packages
  • System-wide upgrade
  • Package search utility
  • Manage package repositories
  • Find packages by name, description, and several other attributes
  • Select packages by status, section, name or a custom filter
  • Sort packages by name, status, size or version
  • Browse available online documentation related to a package
  • Download the latest changelog of a package
  • Lock packages to the current version
  • Force the installation of a specific package version
  • Undo/Redo of selections
  • Built-in terminal emulator for the package manager

It also has the following features on Debian, Ubuntu and Linux Mint only:

  • Configure packages through the debconf system
  • Xapian-based fast search
  • Get screenshots from screenshots.debian.net

Installing with software-center

You can install it by searching in the software-center for synaptic and clicking on Synaptic Package Manager

Installing with CLI

Or, alternatively, open a terminal, and enter:

sudo apt-get install synaptic  

To launch Synaptic, choose System > Administration > “Synaptic Package Manager” Or if you are using the Unity interface, open the dash and search for synaptic. Enjoy!!!

If you have any problems or questions, leave a comment below.

Ghost Material(Ghost)

Get Ghost Material on Gumroad: https://gumroad.com/l/ghost-material

Ghost Material is a clean, simple, responsive and snappy theme for the Ghost blogging platform, built with Google's Material Design library.

It supports devices of all screen sizes and live tile updates on Windows 8/8.1 and 10 for users of Microsoft Edge and IE.

Works with any version of Ghost above 0.6.x

All of that just for $5

Any questions about this product are welcomed, just leave a comment below.

Transfer files using Rsync

I had been using FTP for a very long time to transfer files. It is a fine protocol, but it is time-consuming when transferring big files or many files to a distant server, and the transfer can time out if the server or the client network is slow.

I spent some time looking for alternatives and finally discovered Rsync, a nice protocol for transferring files that was also supported by the server/receiver where I wanted my files to go.

So, I wrote this small script and thought I would share it.

rsync --compress --recursive --checksum upload_folder/ user@host.com:deploy_location/  

This is a simple and easy way to upload files; you can edit the flags according to your needs.
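
If you want to preview what would be transferred before copying anything, rsync also has a dry-run mode; for example:

rsync --dry-run --verbose --compress --recursive --checksum upload_folder/ user@host.com:deploy_location/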

Thank you for reading.