Monthly Archives: March 2018

Birth of RemotePanda — What made me invest my time in this idea

Once upon a time, a newbie called Jon decided to start his own company. He had an innovative idea, some hard-earned cash, and blind faith in the hiring market. He was so naive that when someone told him platforms like Upwork and Freelancer were the easiest and best way to find people who could help him develop his idea, he just fell for it. He didn’t know what to look for, how to look, what to ask, or whom to ask. Basically, “he knew nothing.” All he “supposedly” knew was that these platforms would connect him to the right people. So he decided to use their services, putting all his money and faith in the hands of strangers. One doesn’t need to be an expert in the field to guess what happened next, right?

Now, let me tell you another story: the story of Robb.

Robb in this story was an experienced guy and, yes, he knew everything. For his startup endeavor, he wanted to hire some talented people. As it was a startup, he had limited resources to spend, so he decided to look for remote talent. Note that he knew what to look for, how to look, what to ask, and whom to ask. He followed the traditional remote hiring process and found a “great team” called Team Bolton. He decided to partner with Bolton, and the journey started. Days passed and everything was going well, until they faced a technical challenge. The Boltons were supposed to be experts at addressing exactly this type of challenge, so Robb asked them to take care of it. But they backed out, telling him it wasn’t part of the contract and that they couldn’t work on the issue within the current budget. Well, what could have gone wrong here? This is the very type of challenge Robb hired them for in the first place. They were the experts, they knew Robb had a limited budget, and still they decided to accompany Robb on his journey; then, when the hard time came, they stabbed him through the heart. Sad ending, isn’t it?

Well, Jon and Robb are not the only ones. There are lots of Jons and Robbs out there looking for talented people but unable to find the right ones.

 

What could be the reason?

Some may say it’s the service provider’s fault, some may say it’s the employer’s fault, and some may say it’s the marketplace that needs to improve its vetting process. In my opinion, if one has to play the blame game, then everyone is at fault here. But if you really try to understand the issue, you will realize that the traditional remote hiring process is the real culprit, and we need to fix it.

 

Why not me?

Before we started building RemotePanda, I met more than 150 startup CEOs from 10 different cities, such as San Francisco, Boston, and Houston. I discussed what I had in mind, understood their points of view, collected their feedback, and the conclusion I drew was simple: ‘They Demanded Quality’.

And that’s exactly what we are offering. What I realized from my interactions with all those startup founders is that if we tweak the traditional process a bit, maybe we can solve the quality challenge and improve the success rate many times over.

 

How are we doing it?

1. Have a captain in place:

Rather than being just an introducer (in that case, you could simply find people on Upwork or LinkedIn), we remain part of the contract throughout the lifetime of the project and stay invested in it. We want to free the employer/project owner from the headache of managing people.

2. Quality Watchdogs:

We want to maintain quality, transparency, and a seamless experience for everyone. We make sure expectations are set on each side from the beginning by quantifying the deliverables.

3. Friend, guide, and helping hand in need:

We assign a dedicated personal assistant to the employer/project owner who will be their guide at every stage, helping them set goals, timelines, etc. He or she is the one who makes things happen and acts as the bridge between stakeholders.

4. Learning and Education:

We invest part of the fees that the employer/project owner pays us into the skill development of the talent they hire.

5. Team visit:

We sponsor a visit between the employer/project owner and their remote team every six months.

6. Escrow Billing:

We support escrow billing and release the money to the service provider only after quality work is delivered.

We are powered by Startup Grind Pune, and all the freelancers, agencies, and remote teams associated with us are close network connections. In order to give a personalized experience, we have kept our talent pool limited to the Pune ecosystem.

We are currently in closed beta, testing our model with a select few employers/project owners. We have made good progress so far, and all the current users have given positive feedback about the RemotePanda service.

I’m not claiming that this model is going to solve all the problems, but it at least addresses the three main challenges of remote work: expectations, communication, and feedback. I believe that if we implement things the right way, we’ll definitely be able to fix the broken process to some extent.

Let’s hope for the best. Fingers crossed.

GO with GORM

 

In Chapter 9 of our Golang Tutorial, we touched upon ‘Go Database/SQL’. In this chapter, let’s explore ‘Go with GORM’.
GORM is a fantastic ORM library for Golang that aims to be developer friendly. It is an ORM library for dealing with relational databases, built on top of the database/sql package.
An overview of GORM’s features:
  • Full-Featured ORM (almost)
  • Associations (Has One, Has Many, Belongs To, Many To Many, Polymorphism)
  • Callbacks (Before/After Create/Save/Update/Delete/Find)
  • Preloading (eager loading)
  • Transactions
  • Composite Primary Key
  • SQL Builder
  • Logger
  • Developer Friendly
To install GORM, just use the following command:
go get "github.com/jinzhu/gorm"
To use it, import the package into your project along with the database driver you want.

 

import (
    _ "github.com/go-sql-driver/mysql"
    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql" // you can also import the dialect
)

 

Now use GORM to perform operations on the database.
To connect to the database, use the following syntax:
db, err := gorm.Open("mysql", "user:password@/dbname?charset=utf8&parseTime=True&loc=Local")
NOTE: To handle time.Time correctly, you need to use the parseTime parameter.
You have to create the database manually before you connect.
For PostgreSQL: db, err := gorm.Open("postgres", "user=gorm dbname=gorm sslmode=disable")
And remember to close the database when it is not in use: defer db.Close()

 

main.go
package main

import (
    "log"

    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql"
)

func main() {
    db, err := gorm.Open("mysql", "root:root@tcp(127.0.0.1:3306)/ormdemo?charset=utf8&parseTime=True")
    if err != nil {
        log.Println("Connection Failed to Open")
        return
    }
    defer db.Close()
    log.Println("Connection Established")
}

 

Creating Models and Tables
Define the models before creating tables; the tables will be created based on the models.

 

type User struct {
    ID       int
    Username string
}

func main() {
    // After the db connection is created.
    db.CreateTable(&User{})
    // Also some useful functions:
    db.HasTable(&User{})          // => true
    db.DropTableIfExists(&User{}) // drops the table if it already exists
}
Auto Migration
The Auto Migration feature will automatically migrate your schema: it creates the table based on your model, so we don’t need to create it manually.
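As a minimal sketch (reusing the db handle from the connection example above and the UserModel struct defined later in this chapter):

db.AutoMigrate(&UserModel{})
// AutoMigrate only creates missing tables, columns, and indexes;
// it will not change an existing column's type or delete unused columns.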

 


 

Now just go and check the “ormdemo” database; you will find the table along with the columns.
Note: You need to create the database manually.
GORM also provides a base model definition that includes the fields ID, CreatedAt, UpdatedAt, and DeletedAt. If you want to use it, just embed gorm.Model in your model/struct.

 

// Model's definition given by gorm
type Model struct {
    ID        uint `gorm:"primary_key"`
    CreatedAt time.Time
    UpdatedAt time.Time
    DeletedAt *time.Time
}

type User struct {
    gorm.Model // fields `ID`, `CreatedAt`, `UpdatedAt`, `DeletedAt` will be added
    Name string
}

 

In gorm.Model, the fields are:
CreatedAt – used to store the record’s creation time
UpdatedAt – used to store the record’s last update time
DeletedAt – used to store the record’s deletion time. It won’t actually delete the record; it just sets DeletedAt to the current time, and the record won’t show up when querying, i.e. what we call soft deletion.
If you want to set some SQL parameters on the model fields, you can do it like this:

 

type UserModel struct {
    Id      int    `gorm:"primary_key;AUTO_INCREMENT"`
    Name    string `gorm:"size:255"`
    Address string `gorm:"type:varchar(100)"`
}

 

GORM creates tables with the plural version of the model name: if your model name is UserModel, GORM creates the table as user_models. To avoid this, just call db.SingularTable(true).
In GORM, the ID field is automatically set as the primary key with the auto-increment property.
CRUD Operations
SQL queries using GORM can be specified as follows.
Create/Insert
To create or insert a record, use the Create() function. Save() is also available; it inserts the record and fills in the generated primary key on the struct, as shown in the sketch below.
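A minimal sketch, reusing the db handle and the UserModel struct from above:

user := UserModel{Name: "John", Address: "New York"}
db.Create(&user) // inserts the record into user_models

another := UserModel{Name: "Martin", Address: "Los Angeles"}
db.Save(&another) // Save also inserts when the primary key is unset

// After the insert, the generated primary keys are available on the structs
log.Println(user.Id, another.Id)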

 

 

Update
To update records in a table using GORM, look at the sample below.

 

user := &UserModel{Name: "John", Address: "New York"}
// Select, edit, and save
db.Find(&user)
user.Address = "Brisbane"
db.Save(&user)
// Update with column names, not attribute names
db.Model(&user).Update("Name", "Jack")
db.Model(&user).Updates(
    map[string]interface{}{
        "Name":    "Amy",
        "Address": "Boston",
    })
// UpdateColumn()
db.Model(&user).UpdateColumn("Address", "Phoenix")
db.Model(&user).UpdateColumns(
    map[string]interface{}{
        "Name":    "Taylor",
        "Address": "Houston",
    })
// Using Find()
db.Find(&user).Update("Address", "San Diego")
// Batch update
db.Table("user_models").Where("address = ?", "california").Update("name", "Walker")

 

Delete
To delete records from a table, GORM provides Delete(), as shown in the examples below.

 

// Select records and delete them
db.Table("user_models").Where("address = ?", "San Diego").Delete(&UserModel{})
// Find the record and delete it
db.Where("address = ?", "Los Angeles").Delete(&UserModel{})
// Select all records from a model and delete all
db.Model(&UserModel{}).Delete(&UserModel{})

 

Queries
To fetch records from the database and do some SQL work, GORM provides several query functions. Let’s take a quick look at them.

 

// Get the first record, ordered by primary key
db.First(&user)
// Get the last record, ordered by primary key
db.Last(&user)
// Get all records
db.Find(&users)
// Get a record by primary key (only works for an integer primary key)
db.First(&user, 10)
// Query with Where() [some SQL functions]
db.Where("address = ?", "Los Angeles").First(&user)
// SELECT * FROM user_models WHERE address = 'Los Angeles' LIMIT 1;
db.Where("address = ?", "Los Angeles").Find(&user)
// SELECT * FROM user_models WHERE address = 'Los Angeles';
db.Where("address <> ?", "New York").Find(&user)
// SELECT * FROM user_models WHERE address <> 'New York';
// IN
db.Where("name in (?)", []string{"John", "Martin"}).Find(&user)
// LIKE
db.Where("name LIKE ?", "%ti%").Find(&user)
// AND
db.Where("name = ? AND address >= ?", "Martin", "Los Angeles").Find(&user)

 

Now just go through the program.

 

main.go
package main

import (
    "log"

    _ "github.com/go-sql-driver/mysql"
    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql"
)

type UserModel struct {
    Id      int    `gorm:"primary_key;AUTO_INCREMENT"`
    Name    string `gorm:"size:255"`
    Address string `gorm:"type:varchar(100)"`
}

func main() {
    db, err := gorm.Open("mysql", "root:root@tcp(127.0.0.1:3306)/ormdemo?charset=utf8&parseTime=True")
    if err != nil {
        log.Panic(err)
    }
    defer db.Close()
    log.Println("Connection Established")

    db.DropTableIfExists(&UserModel{})
    db.AutoMigrate(&UserModel{})

    user := &UserModel{Name: "John", Address: "New York"}
    newUser := &UserModel{Name: "Martin", Address: "Los Angeles"}
    // To insert or create the record.
    // NOTE: we can insert multiple records too
    db.Debug().Create(user)
    // We can also use Save, which fills in the primary key after the insert
    db.Debug().Save(newUser)
    // Update record: this updates John's address to California
    db.Debug().Find(&user).Update("address", "California")
    // Select, edit, and save
    db.Debug().Find(&user)
    user.Address = "Brisbane"
    db.Debug().Save(&user)
    // Update with column names, not attribute names
    db.Debug().Model(&user).Update("Name", "Jack")
    db.Debug().Model(&user).Updates(
        map[string]interface{}{
            "Name":    "Amy",
            "Address": "Boston",
        })
    // UpdateColumn()
    db.Debug().Model(&user).UpdateColumn("Address", "Phoenix")
    db.Debug().Model(&user).UpdateColumns(
        map[string]interface{}{
            "Name":    "Taylor",
            "Address": "Houston",
        })
    // Using Find()
    db.Debug().Find(&user).Update("Address", "San Diego")
    // Batch update
    db.Debug().Table("user_models").Where("address = ?", "california").Update("name", "Walker")
    // Select records and delete them
    db.Debug().Table("user_models").Where("address = ?", "San Diego").Delete(&UserModel{})
    db.Debug().Where("address = ?", "Los Angeles").First(&user)
    log.Println(user)
    db.Debug().Where("address = ?", "Los Angeles").Find(&user)
    log.Println(user)
    db.Debug().Where("address <> ?", "New York").Find(&user)
    log.Println(user)
    // IN
    db.Debug().Where("name in (?)", []string{"John", "Martin"}).Find(&user)
    log.Println(user)
    // LIKE
    db.Debug().Where("name LIKE ?", "%ti%").Find(&user)
    log.Println(user)
    // AND
    db.Debug().Where("name = ? AND address >= ?", "Martin", "Los Angeles").Find(&user)
    log.Println(user)
    // Find the record and delete it
    db.Where("address = ?", "Los Angeles").Delete(&UserModel{})
    // Select all records from a model and delete all
    db.Debug().Model(&UserModel{}).Delete(&UserModel{})
}

 

Transaction

 

tx := db.Begin()
err := tx.Create(&user).Error
if err != nil {
    tx.Rollback()
} else {
    tx.Commit()
}

 

Associations
A relationship defines how structs or models interact with each other. For this, you need to define what kind of relationship exists at both ends.
One To One Relationship
A one-to-one relationship specifies how the fields of one model are related to another by a one-to-one mapping. Here I’ve created a one-to-one mapping between the Place and Town structs/models: one Town belongs to one Place.

 

Place.go
package model

type Place struct {
    ID     int `gorm:"primary_key"`
    Name   string
    Town   Town
    TownId int `gorm:"ForeignKey:id"` // this foreignKey tag didn't work; see the note below
}

Town.go
package model

type Town struct {
    ID   int `gorm:"primary_key"`
    Name string
}

main.go
package main

import (
    "fmt"

    _ "github.com/go-sql-driver/mysql"
    "github.com/jinzhu/gorm"

    "26_GO_GORM/One2One_Relationship/model"
)

func main() {
    // Init DB connection
    Db, _ := gorm.Open("mysql", "root:root@tcp(127.0.0.1:3306)/testmapping?charset=utf8&parseTime=True")
    defer Db.Close()

    Db.DropTableIfExists(&model.Place{}, &model.Town{})
    Db.AutoMigrate(&model.Place{}, &model.Town{})
    // We need to add foreign keys manually.
    Db.Model(&model.Place{}).AddForeignKey("town_id", "towns(id)", "CASCADE", "CASCADE")

    t1 := model.Town{Name: "Pune"}
    t2 := model.Town{Name: "Mumbai"}
    t3 := model.Town{Name: "Hyderabad"}
    p1 := model.Place{Name: "Katraj", Town: t1}
    p2 := model.Place{Name: "Thane", Town: t2}
    p3 := model.Place{Name: "Secundarabad", Town: t3}

    Db.Save(&p1) // saving a one-to-one relationship
    Db.Save(&p2)
    Db.Save(&p3)
    fmt.Println("t1==>", t1, "p1==>", p1)
    fmt.Println("t2==>", t2, "p2==>", p2)
    fmt.Println("t3==>", t3, "p3==>", p3)

    // Delete
    Db.Where("name=?", "Hyderabad").Delete(&model.Town{})
    // Update
    Db.Model(&model.Place{}).Where("id=?", 1).Update("name", "Shivaji Nagar")
    // Select
    places := model.Place{}
    fmt.Println("Before Association", places)
    Db.Where("name=?", "Shivaji Nagar").Find(&places)
    fmt.Println("After Find", places)
    err := Db.Model(&places).Association("Town").Find(&places.Town).Error
    fmt.Println("After Association", places, err)
}

 

Note: In this example, you need to create the foreign keys manually using the AddForeignKey() function, because auto-migration of foreign keys does not happen.
One To Many Relationship
In a one-to-many relationship, the models of two types are related by a one-to-many mapping. In the example below, I’ve created a mapping in which one customer has many contacts.

 

main.go
package main

import (
    "fmt"

    _ "github.com/go-sql-driver/mysql"
    "github.com/jinzhu/gorm"
)

type Customer struct {
    CustomerID   int `gorm:"primary_key"`
    CustomerName string
    Contacts     []Contact `gorm:"ForeignKey:CustId"` // you need to do it like this
}

type Contact struct {
    ContactID   int `gorm:"primary_key"`
    CountryCode int
    MobileNo    uint
    CustId      int
}

func main() {
    db, err := gorm.Open("mysql", "root:root@tcp(127.0.0.1:3306)/testmapping?charset=utf8&parseTime=True")
    if err != nil {
        panic(err.Error())
    }
    defer db.Close()

    db.DropTableIfExists(&Contact{}, &Customer{})
    db.AutoMigrate(&Customer{}, &Contact{})
    // Foreign keys need to be defined manually
    db.Model(&Contact{}).AddForeignKey("cust_id", "customers(customer_id)", "CASCADE", "CASCADE")

    Custs1 := Customer{CustomerName: "John", Contacts: []Contact{
        {CountryCode: 91, MobileNo: 956112},
        {CountryCode: 91, MobileNo: 997555}}}
    Custs2 := Customer{CustomerName: "Martin", Contacts: []Contact{
        {CountryCode: 90, MobileNo: 808988},
        {CountryCode: 90, MobileNo: 909699}}}
    Custs3 := Customer{CustomerName: "Raym", Contacts: []Contact{
        {CountryCode: 75, MobileNo: 798088},
        {CountryCode: 75, MobileNo: 965755}}}
    Custs4 := Customer{CustomerName: "Stoke", Contacts: []Contact{
        {CountryCode: 80, MobileNo: 805510},
        {CountryCode: 80, MobileNo: 758863}}}
    db.Create(&Custs1)
    db.Create(&Custs2)
    db.Create(&Custs3)
    db.Create(&Custs4)

    customers := &Customer{}
    db.Debug().Where("customer_name=?", "Martin").Preload("Contacts").Find(&customers)
    fmt.Println("Customer", customers)
    fmt.Println("Contacts", customers.Contacts)
    // Update
    db.Debug().Model(&Contact{}).Where("cust_id=?", 3).Update("country_code", 77)
    // Delete
    db.Debug().Where("customer_name=?", customers.CustomerName).Delete(&customers)
    fmt.Println("After Delete", customers)
}

 

Many To Many Relationship
A user knows many languages, and ‘user_languages’ will be the join table.

 

main.go
package main

import (
    "fmt"

    _ "github.com/go-sql-driver/mysql"
    "github.com/jinzhu/gorm"
)

type UserL struct {
    ID        int `gorm:"primary_key"`
    Uname     string
    Languages []Language `gorm:"many2many:user_languages"`
    // Based on this tag, the third table user_languages will be created
}

type Language struct {
    ID   int `gorm:"primary_key"`
    Name string
}

type UserLanguages struct {
    UserLId    int
    LanguageId int
}

func main() {
    db, _ := gorm.Open("mysql", "root:root@tcp(127.0.0.1:3306)/testmapping?charset=utf8&parseTime=True")
    defer db.Close()

    db.DropTableIfExists(&UserLanguages{}, &Language{}, &UserL{})
    db.AutoMigrate(&UserL{}, &Language{}, &UserLanguages{})
    // All foreign keys need to be defined here
    db.Model(UserLanguages{}).AddForeignKey("user_l_id", "user_ls(id)", "CASCADE", "CASCADE")
    db.Model(UserLanguages{}).AddForeignKey("language_id", "languages(id)", "CASCADE", "CASCADE")

    langs := []Language{{Name: "English"}, {Name: "French"}}
    user1 := UserL{Uname: "John", Languages: langs}
    user2 := UserL{Uname: "Martin", Languages: langs}
    user3 := UserL{Uname: "Ray", Languages: langs}
    db.Save(&user1)
    db.Save(&user2)
    db.Save(&user3)
    fmt.Println("After Saving Records")
    fmt.Println("User1", &user1)
    fmt.Println("User2", &user2)
    fmt.Println("User3", &user3)

    // Fetching
    user := &UserL{}
    db.Debug().Where("uname=?", "Ray").Find(&user)
    err := db.Debug().Model(&user).Association("Languages").Find(&user.Languages).Error
    fmt.Println("User is now coming", user, err)
    // Deletion
    fmt.Println(user, "to delete")
    db.Debug().Where("uname=?", "John").Delete(&user)
    // Updation
    db.Debug().Model(&UserL{}).Where("uname=?", "Ray").Update("uname", "Martin")
}

 

You can find the complete GORM documentation at http://jinzhu.me/gorm/

What is Blockchain?

Blockchain, Bitcoin, cryptocurrency, Ethereum: all are trending topics these days. Everyone is talking about them, even people who are not involved with the crypto world. But I think there are thousands of people out there who want to know what the heck blockchain actually is. That’s the purpose of this post: to describe it in the easiest way possible.

 

Blockchain allows participants to create a digital ledger (a collection of records) and share it among individuals without the need for a central authority or third party to enforce the rules. The removal of the central authority from the data structure is one of the most powerful aspects of the blockchain.

When you do a transaction, the blockchain creates a permanent record and history of the transaction. It’s extremely difficult to change or remove the recorded data. This capability has never existed before, and it can create trust in the digital-data world: an honest and safe system without the need for a third party or central authority to enforce the rules.

 

What is Blockchain?

 

A blockchain is a data structure that acts as a distributed database, operated by a group of individuals.

The blockchain is a peer-to-peer technology with no central authority or third party.

Still not getting it? Or are you wondering why we need this?

Let’s consider an example of two friends, Taral and Kishan, to clear things up:

[Illustration: Kishan and Taral transferring money through their bank]

What is happening here? Kishan and Taral both trust the bank to manage their money, and the bank is there to handle all the transactions. Ultimately, we need a third party to manage and handle our transactions.

 

What if we could remove them? That is what the blockchain is here for: to establish one-to-one trust in the digital world.

 


 

The common misconception on bitcoin and blockchain:


 

Bitcoin and blockchain are often used interchangeably, but they’re not the same. The blockchain is a protocol. Bitcoin, Ethereum, and Ripple all use this protocol to enable secure transfers inside a network. Bitcoin is the name of the cryptocurrency that runs on the Bitcoin network.

The blockchain is a technology, as we saw above. So I’ll conclude this basic introduction to the blockchain here. It gives us a trust factor in the digital world through features that were not present before. The world is considering blockchain the next evolution of the digital world. The possibilities with this technology are endless. Let’s see where it goes.

Conclusion:

 

To know more about blockchain, you can contact our author through RemotePanda. RemotePanda is a personalized platform for companies to hire remote talent and get quality work delivered from the city of Pune. The resources in our talent pool are our close network connections. While connecting them with you, we make sure to manage the quality, growth, legalities, and delivery of their work. The idea is to make remote work successful for you.

Golang Database/SQL

 

Go Database/SQL

In Chapter 8 of our Golang Tutorial, we touched upon ‘Common Utilities in Project Golang’. In this chapter, let’s explore ‘Go database/sql’.

To access databases in Go, we need to use the database/sql package. It provides a lightweight interface to connect to databases.

Basically, to access a database in Go, we need to use sql.DB. You use it to create statements and transactions, execute queries, and fetch results. But keep in mind that sql.DB is not a database connection. According to the Go documentation, “It’s an abstraction of the interface and existence of a database, which might be as varied as a local file, accessed through a network connection, or in-memory and in-process”.

sql.DB performs the following tasks:

1. Opening and closing connections with the underlying database driver.

2. Managing connection pooling.

Connection pooling is managed like this: a connection is marked as in use while you are doing something with it, and it is returned to the pool when it is no longer needed. One consequence of this is that if you fail to release connections back to the pool, you can cause database/sql to open a lot of connections and eventually run out of resources.

After creating sql.DB, you can use it to query the database, as well as to create statements and transactions.

Importing database driver

To use database/sql you will need the package itself, plus the specific driver for the database you want to use. You generally shouldn’t use driver packages directly, although some drivers encourage you to do so. Instead, your code should only refer to types defined in database/sql, if possible. This helps you avoid making your code dependent on the driver, so that you can change the underlying driver (and thus the database you’re accessing) with minimal code changes.

In this chapter, we’ll use the excellent MySQL driver maintained by @julienschmidt and @arnehormann.

So now you need to import the packages to access the DB, like this:
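A minimal sketch of the import block (the blank-identifier import registers the MySQL driver with database/sql):

import (
    "database/sql"

    _ "github.com/go-sql-driver/mysql"
)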


 

We’ve used the _ qualifier in front of this third-party driver import, so none of its exported names are visible to our code.

To use this third-party driver, download it from GitHub using the go get command.

> go get github.com/go-sql-driver/mysql

Now, we’re ready to access the database.

           

As we have already imported the packages, we now need to create the database object, sql.DB.

To create sql.DB, you use sql.Open(), and this returns a *sql.DB object.
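A minimal sketch of opening the handle, assuming a local MySQL server with an employee database (the credentials are placeholders):

package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // sql.Open does not actually connect yet; it only prepares the handle
    db, err := sql.Open("mysql", "root:password@tcp(127.0.0.1:3306)/employee")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
}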


 

Now, let’s clarify a few things:

1. The first parameter to sql.Open() is the driver name. This string is what the driver registers itself with in database/sql, and it is conventionally the same as the package name. Other drivers include github.com/mattn/go-sqlite3 for SQLite3 and github.com/lib/pq for Postgres.

2. The second argument is a driver-specific syntax that tells the driver how to access the underlying datastore. In this case, we are connecting to the employee database on our local machine.


 

3. You should always check and handle errors returned from database/sql operations.

It is idiomatic to defer db.Close() if the sql.DB should not have a lifetime beyond the scope of the function.

As already said, sql.Open() does not establish any connection to the database, nor does it validate driver parameters. Instead, it simply prepares the database abstraction. The first actual connection to the datastore will be established lazily, when it’s needed for the first time. If you want to check right away that the database is available and accessible, use db.Ping() and remember to check for errors:

err := db.Ping()
if err != nil {
    // do something here
}

           

Although you must Close() the database object when you’re done with it, the sql.DB object is designed to be long-lived. Don’t Open() and Close() databases frequently. Instead, create one sql.DB object for each distinct datastore you need to access and keep it until the program is done accessing that datastore. Pass it around as needed, or make it available somehow globally, but keep it open.

Now that the connection is open, let’s look at the operations used to retrieve a result set from the datastore.

Fetching Data from Datastore

Go’s database/sql function names are significant. If a function name includes Query, it is designed to ask the database a question and return a set of rows, even if it’s empty. Statements that don’t return rows should not use Query functions; they should use Exec().

Now let’s look at how to query the database and work with the results. We’ll query the users table for a user whose id is 11 and print the user’s id and name. We’ll assign the results to variables, a row at a time, with rows.Scan().
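A minimal sketch of that flow, reusing the db handle opened above and assuming a users table with id and name columns:

rows, err := db.Query("SELECT id, name FROM users WHERE id = ?", 11)
if err != nil {
    log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
    var (
        id   int
        name string
    )
    if err := rows.Scan(&id, &name); err != nil {
        log.Fatal(err)
    }
    log.Println(id, name)
}
// check for errors encountered during iteration
if err := rows.Err(); err != nil {
    log.Fatal(err)
}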


 

  • Here we’re using db.Query() to send a query to the database.

  • We defer rows.Close()

  • We iterate over rows with rows.Next()

  • We read the columns in each row into variables with rows.Scan()

  • We check for errors after we’re done iterating over the rows.

Some precautions to take:

1. You should always check for an error at the end of the for rows.Next() loop.

2. As long as there’s an open result set (represented by rows), the underlying connection is busy and can’t be used for any other query. That means it’s not available in the connection pool. If you iterate over all of the rows with rows.Next(), you’ll eventually read the last row, and rows.Next() will encounter an internal EOF error and call rows.Close() for you. But if for some reason you exit that loop early (an early return and so on), then rows doesn’t get closed and the connection remains open. This is an easy way to run out of resources.

3. rows.Close() is a harmless no-op if it’s already closed, so you can call it multiple times. Notice, however, that we check the error first, and only call rows.Close() if there isn’t an error, in order to avoid a runtime panic.

4. You should always defer rows.Close(), even if you also call rows.Close() explicitly at the end of the loop.

5. Don’t defer within a loop.

Preparing Queries

You should prepare queries that will be used multiple times. A prepared statement has parameters that are passed in when the statement is executed. This is much better than concatenating strings (and avoids SQL injection attacks).

In MySQL, the parameter placeholder is ?, and in PostgreSQL it is $N, where N is a number. SQLite accepts either of these. In Oracle, placeholders begin with a colon followed by a name, like :parameter1. Here we’ll use ? since we’re using MySQL.
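A minimal sketch, reusing the db handle and the users table assumed above:

// Prepare once, execute many times
stmt, err := db.Prepare("SELECT id, name FROM users WHERE id = ?")
if err != nil {
    log.Fatal(err)
}
defer stmt.Close()
rows, err := stmt.Query(11)
if err != nil {
    log.Fatal(err)
}
defer rows.Close()
// ... iterate over rows as shown earlier ...

// Passing parameters directly to db.Query() also works; under the hood it
// prepares, executes, and closes the statement in one call
rows2, err := db.Query("SELECT id, name FROM users WHERE id = ?", 11)
if err != nil {
    log.Fatal(err)
}
defer rows2.Close()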


 

When you pass parameters directly to db.Query(), it prepares, executes, and closes the prepared statement for you in one call.

Single Row Queries

If a query returns at most one row, you can use a shortcut around some of the lengthy boilerplate code.

Sample Code ⇒
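A minimal sketch using QueryRow(), again assuming the users table from above:

var name string
err := db.QueryRow("SELECT name FROM users WHERE id = ?", 1).Scan(&name)
if err != nil {
    log.Fatal(err)
}
log.Println(name)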


 

Modifying Data and Using Transactions

Now let’s see how to modify data and work with transactions.

Modify Data

Use Exec(), preferably with a prepared statement, for INSERT, UPDATE, DELETE, or any other statement that doesn’t return rows.
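A minimal sketch, assuming the users table above also has a name column we can insert into:

stmt, err := db.Prepare("INSERT INTO users(name) VALUES(?)")
if err != nil {
    log.Fatal(err)
}
defer stmt.Close()
res, err := stmt.Exec("Dolly")
if err != nil {
    log.Fatal(err)
}
lastId, err := res.LastInsertId()
if err != nil {
    log.Fatal(err)
}
rowCnt, err := res.RowsAffected()
if err != nil {
    log.Fatal(err)
}
log.Printf("ID = %d, affected = %d\n", lastId, rowCnt)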


 

After executing a statement, Exec() returns an sql.Result that gives access to statement metadata: the last inserted id and the number of rows affected.

If you don’t care about the result, you can simply ignore it, as in the case below.
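A sketch of the same idea with the result discarded (the DELETE is purely illustrative):

_, err := db.Exec("DELETE FROM users WHERE id = ?", 1)
if err != nil {
    log.Fatal(err)
}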


 

Note that Exec() and Query() are not the same here: if we used db.Query() instead, it would return sql.Rows, which keeps the connection open until the rows are closed.

Working with Transaction

In Go, a transaction is an object that reserves a connection to the datastore. It guarantees that all operations are executed on that same connection.

You begin a transaction by calling db.Begin() and close it by calling Commit() or Rollback() on the resulting tx variable. Under the covers, tx gets a connection from the pool and reserves it for use only within that transaction.

Prepared statements created in a transaction are bound to the same transaction.

The main things to remember here about the transaction are:

If you are not careful, the tx object can remain open, reserving a connection from the pool and not returning it.

While working in a transaction, you should take care not to make calls on the db variable. Make all of your calls on the tx variable that you created with db.Begin(), because db is not in a transaction; only tx is. If you make calls on the db variable, those calls will not happen inside the transaction, as the sketch below illustrates.
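A minimal sketch, reusing the db handle and the users table assumed above:

tx, err := db.Begin()
if err != nil {
    log.Fatal(err)
}
// prepared statements created on tx are bound to this transaction
stmt, err := tx.Prepare("INSERT INTO users(name) VALUES(?)")
if err != nil {
    tx.Rollback()
    log.Fatal(err)
}
defer stmt.Close()
if _, err := stmt.Exec("Dolly"); err != nil {
    tx.Rollback()
    log.Fatal(err)
}
if err := tx.Commit(); err != nil {
    log.Fatal(err)
}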


 

Using a Prepared Statement

Prepared Statement and Connections

A prepared statement is a SQL statement with parameter placeholders that is sent to the database server and prepared for repeated execution. It’s a performance optimization as well as a security measure; it protects against attacks such as SQL injection, where an attacker hijacks unguarded string concatenation to produce malicious queries.

In MySQL, as in most databases, you first send the SQL to the server and ask for it to be prepared with placeholders for the bind parameters. The server responds with a statement ID. You then send an execute command to the server, passing it the statement ID and the parameters.

At the database level, a prepared statement is bound to a single database connection. The typical flow is that the client sends a SQL statement with placeholders to the server for preparation, the server responds with a statement ID, and the client then executes the statement by sending the ID and the parameters.

In Go, connections are not exposed directly by the database/sql package. You prepare a statement on a db or tx object, not directly on a database connection. database/sql handles this roughly as follows:

1. When a statement is prepared, it is prepared on a connection in the pool.

2. The Stmt object remembers which connection was used.

3. When you execute the Stmt, it tries to use that connection. If it is not available, it gets another connection from the pool and re-prepares the statement on it.

Because of this re-preparation, high-concurrency usage of the db can keep many connections busy and lead to statements being prepared many times.


 

Below are the parameter placeholder syntaxes, which are database specific. Compare MySQL, PostgreSQL, and Oracle:

MySQL              PostgreSQL            Oracle
=====              ==========            ======
WHERE col = ?      WHERE col = $1        WHERE col = :col
VALUES(?, ?, ?)    VALUES($1, $2, $3)    VALUES(:val1, :val2, :val3)

Handling Errors

Almost all operations on database/sql types return an error as the last value. You should always check these errors and never ignore them. A few special error behaviors are worth knowing about.

Errors from iterating the resultset

Consider the following code:
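A minimal sketch of the pattern (the query itself is a placeholder):

rows, err := db.Query("SELECT name FROM users")
if err != nil {
    log.Fatal(err)
}
defer rows.Close()
for rows.Next() {
    // ... scan each row here ...
}
// rows.Next() may have stopped because of an error rather than
// because it reached the end of the rows, so always check
if err := rows.Err(); err != nil {
    log.Fatal(err)
}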


 

Errors from closing Resultsets

You should always close sql.Rows explicitly if you exit the loop prematurely. Rows are closed automatically if the loop exits normally or through an error, but you might exit the loop early by mistake, as sketched below.
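A sketch of the early-exit case (the break condition is purely illustrative):

for rows.Next() {
    var name string
    if err := rows.Scan(&name); err != nil {
        break // exiting early: rows is not closed automatically here
    }
    if name == "stop" {
        break
    }
}
// close explicitly in case we broke out of the loop early;
// calling Close() again on already-closed rows is harmless
if err := rows.Close(); err != nil {
    log.Println(err)
}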


 

The error returned by rows.Close() is the only exception to the general rule that it’s best to capture and check errors in all database operations. If rows.Close() returns an error, it’s unclear what you should do.

Errors from QueryRow

Consider the single-row query shown earlier, which fetches one user’s name.


 

What if there was no user with id = 1? Then there would be no row in the result, and Scan() would not scan a value into name. What happens then? Go defines a special error constant, sql.ErrNoRows, which is returned by QueryRow() when the result is empty. This needs to be handled: an empty result is often not considered an error by application code, and if you don’t check whether the error is this special constant, you’ll cause application errors you didn’t expect.

Errors from the query are deferred until Scan() is called, and are then returned from it, as in the sketch below.
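A minimal sketch of handling sql.ErrNoRows:

var name string
err := db.QueryRow("SELECT name FROM users WHERE id = ?", 1).Scan(&name)
if err != nil {
    if err == sql.ErrNoRows {
        // there was no matching row; this usually isn't an application error
    } else {
        log.Fatal(err)
    }
}
log.Println(name)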


 

Working with NULLS

Nullable columns lead to a lot of ugly code. If you can, avoid them. If not, you need to use the special types from the database/sql package to handle them, or define your own.

There are nullable types for booleans, strings, integers, and floats. Here’s how to use them.
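A minimal sketch using sql.NullString (the column name is a placeholder):

var s sql.NullString
err := db.QueryRow("SELECT middle_name FROM users WHERE id = ?", 1).Scan(&s)
if err != nil {
    log.Fatal(err)
}
if s.Valid {
    log.Println("middle name:", s.String)
} else {
    log.Println("middle name was NULL")
}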


 

But there are some limitations, and reasons to avoid nullable columns:

1. There’s no sql.NullUint64 or sql.NullYourFavoriteType; you need to define your own for those.

2. Nullability can be tricky, and it’s not future-proof. If you think something won’t be null but you’re wrong, your program will crash.

3. One of the nice things about Go is having a useful default zero value for every variable. That isn’t how nullable things work.

The Connection Pool

Connection pooling is provided by the database/sql package. Connection pooling is the mechanism of maintaining a pool of connections and reusing them. It improves the performance of executing commands on the database by letting the same connection object serve a number of client requests.

Every time a client request is received, the pool is searched for an available connection, and it’s highly likely that a free connection is found. Otherwise, either the incoming request is queued or a new connection is created and added to the pool (depending on how many connections are already in the pool). As soon as a request finishes using a connection, the connection is given back to the pool, from where it can be assigned again.

Some useful things to know about connection pooling:

1. Connection pooling means that executing two consecutive statements on a single database might open two connections and execute them separately. For example, LOCK TABLES followed by an INSERT can block, because the INSERT might run on a connection that does not hold the table lock.

2. Connections are created when needed and there isn’t a free connection in the pool.

3. By default, there’s no limit on the number of connections. If you try to do a lot of things at once, you can create an arbitrary number of connections. This can cause the database to return an error such as “too many connections.”

4. In Go 1.1 or newer, you can use db.SetMaxIdleConns(N) to limit the number of idle connections in the pool. This doesn’t limit the pool size, though.

5. In Go 1.2.1 or newer, you can use db.SetMaxOpenConns(N) to limit the number of total open connections to the database. Unfortunately, a deadlock bug (since fixed) prevents db.SetMaxOpenConns(N) from being used safely in 1.2. See the sketch after this list.

6. Connections are recycled rather fast.

7. Keeping a connection idle for a long time can cause problems. Try db.SetMaxIdleConns(0) if you get connection timeouts because a connection is idle for too long.
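A small sketch of the pool-tuning calls from points 4 and 5 above (the limits are arbitrary examples):

db.SetMaxIdleConns(10)  // keep up to 10 idle connections in the pool
db.SetMaxOpenConns(100) // allow at most 100 total open connections (Go 1.2.1+)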
