MODS 2017 vignettes – Day 2

Alright, here I am, attending MODS 2017 day 02. Check out my post on my MODS day 01 experience here (for those who don’t know about MODS, check it out here). Yesterday was a blast; MODS 2017 day 1 was so overwhelming it scared me a little bit. I learned more than I thought I would and got to know about a lot of new tech and trends. My expectations today are off the charts. Can’t wait to attend the sessions. Like yesterday, I am gonna live blog my experience. Here it goes, first session is on AI.

Data is the new oil, and AI is the engine that runs on it. Mr. Ajit Jaokar needs no introduction (for the sake of completeness, here is a brief intro: Ajit is a well-known personality in the field of AI and machine learning, teaches data science for IoT at Oxford, and more). Ajit explains AI and the future of AI: progress is inevitable, and machines will lie and cheat in the future if needed. A very good intro to deep learning and the top-down and bottom-up approaches and their challenges; the bottom-up approach has no rules. In 4-5 years the market will be more vibrant and organised, so this is the best time to learn AI and machine learning. One of the most curious questions was about the types of problems AI can address. A few examples: complex planners (tasks which require planning), better communicators (chat bots, remember that racist Microsoft bot), new perception, enterprise AI, ERP and data warehousing, super long sequence pattern recognition etc. AI can be anywhere. I mean, can you think of using AI for cucumber farming, to sort good and bad cucumbers? The applications and areas are limitless; not even the sky is the limit (probably not even the Milky Way galaxy is the limit with AI). Data is the new oil indeed: no data, no training. AI will impact almost everything soon. Machines training machines is happening as we speak. Great insights on AI and deep learning. Thank you Mr. Jaokar. Cannot think of any better way to start #mods17 day 02.

Short session (15 minutes) on the Karnataka start-up policy. Mr. Aniket Vaidya is the speaker. Karnataka’s vision is to create the best ecosystem for startups in the capital. There are 4000+ startups in Bengaluru itself. The government is taking several steps to make doing business easier for startups. Aniket makes the audience aware of the benefits and conditions the Karnataka government provides for tech startups, and how to get funded by it. A neat initiative and a great push towards Make in India.

Next up is “Building a BFF (Backend for Frontend) with Swift on the server” by Mr. Pushkar Kulkarni. If you have followed my blog for #mods17, you know I love Swift. No surprise that this session sounds very lucrative to me. A general purpose API (monolith or microservice) may not be apt for mobile clients. The thought-provoking question here is “Is one backend for several clients right and peaceful?”. The solution is one backend per experience. The speaker clearly communicated Swift’s history and evolution. Swift is already available on Darwin platforms, there are a great many frameworks (Kitura, Vapor, Perfect), and Swift is supported on Linux. Thanks to the Swift on the web session by Joshua Smith yesterday, this all makes sense and looks very promising. Swift on the server is comparable to Java when it comes to performance, and memory-wise Swift performs about 2x better than Java. Swift is fast, ideal for the cloud, and especially ideal if your client is iOS. One language for client and server greatly increases productivity. An amazing follow-up to yesterday’s Swift for the web. A very well organised session; I clearly understood the basics, the benefits and the proposed architecture. Thanks Mr. Pushkar.

This one is on data science: “Become an expert data scientist” by Mr. Rajesh K Jeyapaul. A great start with the basics. Deep learning is part of machine learning, which is a part of artificial intelligence (think Venn diagram). High computational power is needed as we have to deal with large data sets. Machine learning, natural language processing (NLP), vision etc. are all part of artificial intelligence. A great video of a robot conversing with a human #AIChronicles; it looks like stuff from a Mission Impossible movie, can’t believe that was real. Deep learning and machine learning need multiple skills, so collaboration is the key. PCA, or principal component analysis, helps select the data that is right for training and reject the rest. Mr. Rajesh makes us aware of various libraries for machine learning, how to reduce dimensions and how to ultimately predict better. I am no data scientist, but now I know the basics, the libraries and algorithms to use (scikit-learn, pandas, PCA) and where to go from here. A great thought-provoking session. Thanks Rajesh.

Next one is from the mobile domain: “Architecting mobile app security development using OWASP top 10” by Mr. Rohit Bhardwaj. This session is of great importance to me as I am a mobile app developer. 84% of all cyber attacks happen in the application layer. Rohit clearly communicated the threats, their seriousness and how they affect us globally. Threats in the mobile browser: phishing, framing, man-in-the-middle etc. Mobile is vulnerable via the browser, malware apps, WiFi/GSM and app memory. Minimize the attack surface, use ports 80 and 443 only, and change all the defaults (like 22 for SSH). Defensive programming should be used. Separation of duties, with separate privileges for separate roles, is another way to protect. Another is to fix security issues correctly: fix it and then test it. Rohit explains the changes in the OWASP top 10 list from 2013 to 2017 and how to tackle them. A live example of SQL injection was shown, very interesting to actually view it live; prepared statements are how you prevent SQL injection. Another live example showed session hijacking by script injection in search. Cross-site scripting attacks can be avoided by not including user-provided input in the output page. Avoid insecure direct object references. Security misconfiguration is another vulnerability; encrypt sensitive data. To avoid cross-site request forgery (CSRF), add a secret key and send it while calling the APIs. Unvalidated redirects and forwards are another way an attacker can misuse data, so restrict them. For mobile, take these into consideration: insecure communication, poor authorization and authentication, unintended data leakage, treating geolocation data carefully (don’t store it if not required), implementing OAuth2 or JWT (web tokens), reducing run-time manipulation (use C/C++ libs in iOS and JNI in Android), and securely storing sensitive data in RAM. Perform threat modelling for security. Check out securecoding.cert.org, greenbone, twit.tv, cybrary. One of the best sessions of #mods17. Thanks Rohit.

I haven’t attended any of the deep dive sessions of MODS 2017, and I am not gonna miss the last one: “Unit testing iOS applications” parts 1, 2 and 3 by Mr. Steve Scott (Scotty). Parts 1, 2 and 3 together make a 180-minute session, so this paragraph is gonna be very long. I have been learning iOS app development for about 4-5 months now, so learning unit testing seems a natural next step. I am sure learning to unit test iOS apps will enable me to do a lot more (like, say, Test Driven Development (TDD)). Scotty clearly stated why we should unit test apps; there are tons of benefits. Unit testing makes us think about how to write testable code and how to manage code, and that, I believe, is beautiful code (like poetry). Tests are not there to prove what you have written is right; tests are there to show what might break later on. Write @testable import <target name> to access internal properties and functions. Swift even allows classes inside functions, visible only within the function body. Test expectations can be used for testing asynchronous and dependent code. Scotty showed simple and complex examples of iOS unit testing, explained how to mock the network layer and data, why mocking is needed, and why data consumers need not care where data is coming from. Tell the component what to use (default or custom properties), don’t let the component ask what to use; that is the key to testable code. This was just 120 minutes of the session; I have an early flight and will have to leave midway through part 3. I am sure Scotty is gonna present more advanced and pragmatic testing features. It was great to attend the iOS unit testing deep dive, Scotty, thank you. And apologies that I am not including part 3 of the deep dive.

Alright, with this I am out of sessions to write about today. I wish MODS was a week-long conference. I have learned so much; all the sessions I attended were thought provoking and made the audience think about various paradigms and approaches. Thank you Saltmarch team for such a wonderfully organised, hand-crafted event.

Can’t wait for MODS 2018.

Kaushal signing off from #mods17

Kaushal Dhruw (@drulabs github/twitter/stackoverflow)

 

MODS 2017 vignettes – Day 1

This is the first time I am attending the MODS conference (for those who don’t know, check it out here). Needless to say I have no frame of reference, though I do have some high hopes (because I have seen the schedule and it looks awesome). I won’t call myself a veteran mobile app developer, but yeah, for a veteran mobile app developer like me this conference is heavenly. Everything that I love and am interested in, and more, is covered; the schedule includes sessions and topics ranging from android, iOS, Swift, functional and reactive programming, IoT and data science. I just can’t wait to attend it all (unfortunately I can’t, there are 3-4 sessions in parallel). Got my badge and schedule without any bugs :p. Loving #mods17 so far. Below is my live experience of the #MODS17 sessions (intentionally written in past tense).

Attending the keynote right now. Scott Davis, the speaker of “It’s spelled ‘Accessibility’, not ‘Disability‘”, has definitely made the session interesting by starting it with light humour and funny comments. I had not actually thought about it before, but this session makes the audience (I mean me) think about design from a new perspective. A sensitive topic presented in a very well organised fashion. It is not just accessibility; Scott explained architecting and designing with apt examples and comments. This session makes us aware of the different dimensions that need to be considered before development, and it has made me a firm believer in the fact that “if you design for accessibility, everyone benefits, not just the disabled“. Well, I didn’t know what to expect an hour ago, and now my expectations are off the charts. Thank you Scott. An awesome start to MODS 2017.

“Emerging Architectures for Digital Transformation”: Mr. Kumar shared great insights in the devops domain, with great examples of microservices, monolithic architecture and Kubernetes. I am not much of a devops guy, but now I know about the latest tech and trends in the area. My favourite quote from this session is “Everyone’s container journey starts with one container“.

Next up: Swift for the web. Joshua Smith does know how to grab an audience’s attention. Swift is one of my favourite programming languages, for two reasons: extensions and protocol-oriented programming. The one thing that was missing was using Swift for server development; Kitura makes it possible. Kitura is open source and is inspired by Express (a popular Node.js framework for web app development). What more can I ask for: I get the power of Swift and Express to develop web apps. Bam!!! A divine gift, I might add. I have been using Swagger for some time now, and Josh conveyed the message in an excellent manner, in both layman and veteran terms. Josh showed how to use Swift, Kitura, Swagger, CouchDB, Docker etc. to create a wonderful web app, with examples (I loved the explanation). The other great part of this session was the explanation of Swift 4 and its known issues, with examples. Heck!!! I didn’t even know about Vapor and Swift for android apps. A very knowledgeable session, thanks Joshua Smith.

More awesomeness is next: “Applying functional insights without losing swift”. I have been following Rob Napier for some time; he explains a lot of stuff in an excellent, easy to understand way. Here is a summary of his session. Functional programming is not a bunch of symbols or expressions, it is a programming paradigm. Swift provides a number of ways to implement functional programming in an easy-to-read way. At the end of the day what matters is performance and maintainability, and functional programming is a great way to get there. A lot of what we do when creating iPhone, iPad and macOS apps is the opposite of what is described in design principles: using UIViewController as a data source and other delegates defies SRP and probably other principles. Design your components so that they are rigid enough to fulfil their role and flexible enough to be replaced easily. Don’t fight Swift and make it something that it is not; use its features to make code more robust and maintainable. Stronger types mean fewer tests, because a strong type restricts the values a variable or constant can contain. Create stronger types (like structs) and use them where types are weak. Types should be strong and protocols should be generic. The session was mostly theoretical, with lots of meaningful suggestions. A concrete example would have made it perfect.

Another session by Joshua Smith: “Microservices in Swift“. After a nicely architected session on “Swift for the web“, in this session Josh describes microservices, service-oriented architecture (SOA) and microservices in Swift. Microservices are focussed, and easy to develop, maintain and deploy. Josh explained API gateways, service discovery tools and virtual machines (highly optimised I/O) in relevance to microservices. Apache OpenWhisk is open source, event driven, serverless (or FaaS) and supports Swift. gRPC with protocol buffers is an alternative to REST. As I mentioned earlier I am not much of a devops or backend guy, but if I can use Swift to create web apps, I will become one. Another great session by Josh.

Next is an android session: “Implementing functional reactive programming on android using Kotlin”. This one has my entire attention as I am basically an android developer, an early Kotlin adopter, a functional programming fan, and in line to learn reactive programming. Jonathan Pereira is the speaker. The session started with a little history and Java. Functional reactive programming is different from reactive programming, though: it adds functional operators to reactive programming, and it helps with concurrent and asynchronous operations. The speaker clearly stated why to prefer Kotlin over Java for android development and the advantages of functional reactive programming. RxKotlin is the reactive extension for Kotlin. I have used maybe 10% Kotlin in production, and after this session I feel like functional reactive programming with Kotlin is what is gonna take my app to the next level. Man!!! Why have I not been using it from the beginning? Thanks Jonathan, just the push I needed.

Next in line: “Mobile, AI and TensorFlow Lite”. Supriya Srivatsa is the speaker for this session. AI is the future; we see it everywhere, or rather we will see it everywhere in the near future. As per my understanding, TensorFlow is a machine learning framework that requires huge computational power and resources; with TensorFlow Lite we can do AI and machine learning on our smartphones. This session is short (30 minutes), and the speaker presents the ideas and concepts in a nice, easy to understand fashion. Supriya makes the audience aware of the various methods and approaches for prediction on mobile: quantize the model, size it down to something reasonable, and ship the prediction model with the app. The implementation demo just before the session concluded was exactly what I wanted.

The last talk I am attending today is “It’s time for accessibility“, another talk by Scott Davis. Unfortunately I have to finish this blog post before this session finishes. As in the previous talk, Scott starts the session with light humour, this time about rain and Bengaluru traffic jams; he gets everyone’s attention with a 2-line joke. This is a continuation of the accessibility talk that I wrote about above. The whole point is to build it for everyone.

With all these talks I can definitely say that #mods17 was totally worth flying from Pune to Bengaluru for. I had no frame of reference this morning, and now I am taking away a lot more than I thought I would. An amazing arrangement by the Saltmarch team. I would love to attend this every year.

Stay tuned for day 2.

Kaushal Dhruw (@drulabs github/twitter/stackoverflow)

Go ServerLess with Firebase cloud functions


With the announcement of the cloud functions beta at the Google Cloud Next 2017 event, Google has added one of the most requested features to the firebase suite. This is a major step from Google towards making firebase serverless. In this post we will see some of the capabilities, pros and cons, setup and deployment of firebase cloud functions. Google IO is just days away, and knowing about firebase is surely going to help in understanding the upcoming firebase features.

Serverless computing, also known as function as a service (FaaS), is a cloud computing code execution model in which the cloud provider fully manages starting and stopping a function’s container as necessary to serve requests. Rather than per virtual machine, requests are billed by an abstract measure of the resources required to satisfy each request. The name serverless architecture / computing does not mean no servers are involved; it means the maintenance, scaling and management of servers is handled by the provider, so developers can focus on single-purpose services and core business problems. The first major serverless offering was AWS Lambda, launched by Amazon back in 2014.

Firebase cloud functions (beta) are relatively new (only a couple of months old). Pricing comes in 3 tiers; the free tier includes a generous usage limit, so you can make an informed decision after trying it out. Since it is backed by Google, easy integrations are available with Google services like Compute Engine and analytics. According to the official documentation it is the glue between firebase and cloud services (note that outbound networking is not available in the free tier).

And now to answer the big question: how does it work? You write a piece of JavaScript code (that exposes some functions) which gets stored in the Google cloud and runs in a managed node.js environment. These cloud functions are triggered by changes in the database or storage, analytics events, new user signups etc. Firebase even provides a way to make them HTTP triggered. Here are a few of the capabilities of cloud functions:

  • Real-time database triggers
  • Firebase authentication triggers
  • Firebase analytics triggers
  • Cloud storage triggers
  • Cloud Pub/Sub triggers
  • HTTP triggers

This is a powerful set of features, here are a few of the use cases:

  • Sanitize abusive text content from database (with real-time database triggers).
  • Send welcome email to new users (with authentication triggers).
  • Send a gift coupon as push notification to a user who just purchased something from your app (Firebase analytics triggers).
  • Create and store thumbnail of images, filter out abusive images (Cloud storage triggers, with cloud vision API)
  • Use networking library of your choice to trigger cloud function to send push notification, get data etc. (HTTP triggers)

Setting up firebase cloud functions (assuming you already have a firebase project set up in the console):

  • Install node.js and npm.
  • Install the latest firebase command line interface (CLI)
    • npm install -g firebase-tools
  • Log in to firebase
    • firebase login
  • initialize firebase project workspace
    • firebase init

      (pick only functions, default selection includes database and hosting as well, select yes for installing dependencies)

This initializes the current directory (the one your terminal is pointing to) as the firebase workspace for the selected project and creates a directory named functions. The functions directory has the following contents:

  • package.json – Used by node to describe the project. All the project dependencies are listed here. By default it lists “firebase-admin” and “firebase-functions” as dependencies.
  • node_modules – This is where npm keeps all the dependencies. If you have worked with node.js you already know about this. No need to check the entire folder into source control.
  • index.js – This is the entry point for defining all the cloud functions. This is where the server code resides.

Once this is done, create your cloud functions in index.js. Let’s look at a few examples before deploying anything.

HTTP triggered cloud function:

This cloud function performs a few simple tasks based on the request: it rejects PUT requests, and it returns the user count from the real-time database, with or without the name parameter passed in the POST body. The point is: it can be triggered via a REST endpoint and has access to firebase features like the real-time database.

'use strict';
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

// http triggered cloud function
exports.triggerHttp = functions.https.onRequest((req, resp) => {
    // Forbid PUT requests; return so we don't send a second response below
    if (req.method === 'PUT') {
        console.log("Forbidden response sent");
        return resp.status(403).send('Forbidden!');
    }
    if (req.body.name) {
        return admin.database().ref('/users')
            .once('value').then(allUsers => {
                console.log("NAMED post body");
                resp.send('hi ' + req.body.name + '. Total users: ' +
                    allUsers.numChildren());
            });
    } else {
        return admin.database().ref('/users')
            .once('value').then(allUsers => {
                console.log("no name post body. user count sent");
                resp.send('Empty post body. Total users: ' +
                    allUsers.numChildren());
            });
    }
});

Real-time database triggered cloud function:

These triggers can be attached to any node of the firebase real-time no-SQL JSON database. For every change in that node, the function is triggered. Here is an example of the use case discussed above: sanitizing abusive text in the database.


// Sanitizing comments.
exports.sanitizeComments = functions.database.
    ref('/comments/{commentId}').onWrite(event => {

    const commentId = event.params.commentId;
    const post = event.data.val();
    console.log('commentId: ' + commentId + ": " + post.text);

    if(post.sanitized){
        // prevent infinite looping
        return;
    }

    post.sanitized = true;
    post.text = sanitize(post.text);
    return event.data.ref.set(post);
});

There are a couple of points to note here. The trigger is attached to the node /comments/{commentId}, where each comment has the following child structure:

{"artifactId": true,
    "text": "This is a sample comment. change crazy to lovely and stupid to wonderful",
    "commenter": "user_20jd2j2jfj",
    "sanitized": false,
    "timestamp": 1493815657159
}

So for every comment that gets added we can extract the comment id (event.params.commentId) and the data (event.data.val()) as shown in the JavaScript function above. As soon as a comment (the JSON above) is added, the function is triggered and it overwrites the data with sanitized text. To prevent infinite looping (writing the sanitized data back to the database would trigger the function again, and this would keep going until your quota is exhausted), the if check on the sanitized flag is in place. Now, since writing to the database is an asynchronous operation, it returns right away, before the write actually completes. To handle this, cloud functions must return a JS Promise, which succeeds or fails when the asynchronous operation (like the write) is complete. This lets firebase know when the function’s work is done. Refer to this link to learn more about JavaScript promises.

In one of my apps I am using real-time database triggers to send push notifications about new posts. The image below shows sending an FCM message when a user gains a new follower.

(image: database-triggered cloud function sending an FCM push when a user gains a new follower)

Firebase Auth triggered cloud function:

Among its many offerings, firebase also provides an auth SDK that enables app developers to integrate social login with major identity providers like facebook, google, twitter etc. Once a user logs in with one of these, or signs up via email (the firebase way), an event is emitted; this can be used to send a welcome email to new users.

exports.sendWelcomeEmail = functions.auth.user()
    .onCreate(event => {
        const user = event.data; // The Firebase user.
        const email = user.email; // The email of the user.
        const displayName = user.displayName; // The display name of the user.
        return sendWelcomeEmail(email, displayName);
});

The function definition states that this event will be triggered when a new user is created. To implement the sendWelcomeEmail function, any 3rd party email provider (paid tiers) or gmail can be used.
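
The body of sendWelcomeEmail is not shown in this post. As a rough sketch (not the official sample), it could be implemented with the nodemailer package and a gmail account; the config keys and sender address below are placeholders:

const nodemailer = require('nodemailer');

// ASSUMPTION: gmail credentials stored in functions config, e.g.
// firebase functions:config:set gmail.email="..." gmail.password="..."
const mailTransport = nodemailer.createTransport({
    service: 'gmail',
    auth: {
        user: functions.config().gmail.email,
        pass: functions.config().gmail.password
    }
});

function sendWelcomeEmail(email, displayName) {
    // sendMail returns a promise in recent nodemailer versions,
    // so the cloud function can return it directly
    return mailTransport.sendMail({
        from: '"My App" <noreply@my-app.com>', // placeholder sender
        to: email,
        subject: 'Welcome!',
        text: 'Hi ' + (displayName || 'there') + ', welcome aboard!'
    });
}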

Firebase storage triggered cloud function:

When a new image is uploaded, create a thumbnail of the image and send it to the client:

(image: storage-triggered cloud function that generates a thumbnail)
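
The code from the image is not reproduced here, but a skeleton of such a function with the beta event API would look roughly like this (the thumb_ naming convention is an assumption, and the actual resize step, typically done with ImageMagick, is elided; only the trigger wiring is shown):

const gcs = require('@google-cloud/storage')();

exports.generateThumbnail = functions.storage.object().onChange(event => {
    const object = event.data;     // the storage object that changed
    const filePath = object.name;  // e.g. 'images/photo.jpg'

    // skip deletions, non-images and files that are already thumbnails
    if (object.resourceState === 'not_exists') return;
    if (!object.contentType || !object.contentType.startsWith('image/')) return;
    if (filePath.indexOf('thumb_') !== -1) return;

    const bucket = gcs.bucket(object.bucket);
    // download to /tmp, resize (e.g. ImageMagick 'convert -thumbnail'),
    // then upload as thumb_<name> -- the resize step is elided in this sketch
    console.log('would create thumbnail for', filePath);
    return Promise.resolve();
});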

Deploying cloud functions:

  • deploy all – database rules, storage rules, functions and hosting data
    • firebase deploy
  • Deploy partially
    • firebase deploy --only functions

Deployment can take several minutes. After successful deployment, you should see a screen similar to this:

(screenshot: successful firebase deploy output listing the function URL)

Notice the function URL for the HTTP-triggered cloud function. Use this as the HTTP endpoint for triggering it.
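
For example, assuming a hypothetical project ID of my-project (copy the real URL from your own deploy output), the function from earlier can be triggered like this:

curl -X POST -H "Content-Type: application/json" \
     -d '{"name":"Kaushal"}' \
     https://us-central1-my-project.cloudfunctions.net/triggerHttp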

The Pros:

  • Cloud functions are very helpful in keeping all the business logic centralized in one place and not in multiple clients.
  • Firebase real-time database makes it insanely easy to develop an MVP. Database-triggered cloud functions enable solving many issues that would otherwise need an app update.
  • No server maintenance and no other language to learn (that is why it is serverless).
  • Value for money: free and paid quota are much better than what is available out there.
  • Single command for initialization and deployment.

The Cons:

  • Maintaining cloud server code can become a nightmare as your business logic becomes complex.
  • No control over instances.
  • In case of an error like the infinite loop issue mentioned above, there is no alert.
  • Firebase doesn’t support complex database queries yet, this restricts what can be done with firebase functions.
  • Only one language choice as of now.
  • Cloud functions is still in beta and I wouldn’t recommend it for production.
  • To test outbound network requests a paid plan is needed (Flame or Blaze)

I have open sourced 2 apps that use firebase:

  • SyncroEdit:  A collaborative note editing android app that uses realtime database (check it out here)
  • Vividity: Photo sharing android app using firebase (check it out here). This one has an index.js file at its root that covers some of the use cases discussed here.

Let me know what you think. Got any questions? shoot below in comments.

-Kaushal D (@drulabs twitter/github)

drulabs@gmail.com

WiFi direct service discovery in android


I suggest reading my previous posts on NSD and WiFi direct before reading this one; it builds on things discussed in those two posts. If you are already aware of WiFi direct service discovery, you can directly check out my sample code on git.

Like NSD, we can register and discover services over WiFi direct. The problem I faced with plain WiFi direct was the single pre-fixed port needed for the initial data transfer (as peer devices were not aware of each other’s port information); only after that first exchange could a dynamic port take over and the fixed port be released. Using WiFi direct service discovery we can append a small amount of additional data (100-200 bytes) to the advertised service. So unlike plain WiFi direct, we can request a port dynamically and advertise it along with the service. No pre-fixed port.

We all know socket communication, here is a recap just in case:

//Server side
ServerSocket mServer = new ServerSocket(mPort);
Socket socket = null;
while (acceptRequests) {
    // accept() is a blocking operation
    socket = mServer.accept();
    handleData(socket);
}

//Client side
Socket socket = new Socket(hostIP, hostPort);
OutputStream os = socket.getOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(os);
oos.writeObject(transferObject);
oos.close();

If you have read my earlier post on WiFi direct, you already know almost everything required for WiFi direct service discovery (except the appending of additional data). Functionally it is similar to NSD, and the code is similar to WiFi direct.

Adding local service

The same class as in WiFi direct is used. Here we use WifiP2pManager‘s addLocalService() method. Here is the official documentation for this method:

(image: official documentation of WifiP2pManager.addLocalService())

All the parameters are familiar except for WifiP2pServiceInfo. Similar to NSD, this is a holder object for service information; in this case, WiFi P2P service info.

WifiP2pServiceInfo is a class for storing service information that is advertised over a WiFi peer-to-peer setup. It has two direct subclasses: WifiP2pDnsSdServiceInfo (Bonjour/DNS-SD) and WifiP2pUpnpServiceInfo (UPnP).

For our use case, which is sharing port and IP information and then finally sharing data with other devices, we will use the former, as it allows us to append a string map to the service. The latter allows us to append a list of strings, which could also be used.

As mentioned earlier, WifiP2pDnsSdServiceInfo allows us to append a string map. Here is how to do it:

Map<String, String> record = new HashMap<String, String>();
record.put(KEY_BUDDY_NAME, player == null ? Build.MANUFACTURER : player);
record.put(KEY_PORT_NUMBER, String.valueOf(port));
record.put(KEY_DEVICE_STATUS, "available");
record.put(KEY_WIFI_IP, Utility.getWiFiIPAddress(context));

WifiP2pDnsSdServiceInfo service = WifiP2pDnsSdServiceInfo.newInstance(
    SERVICE_INSTANCE, SERVICE_TYPE, record);
wifiP2pManager.addLocalService(wifip2pChannel, service, new WifiP2pManager.ActionListener() {

    @Override
    public void onSuccess() {
        Log.d(TAG, "Added Local Service");
    }

    @Override
    public void onFailure(int error) {
        Log.e(TAG, "ERRORCEPTION: Failed to add a service");
    }
});

As you can see in the sample code above, a map called record is passed when creating an instance of the service info object. After this, you are done advertising your WiFi direct service.

Discovering WiFi direct services

Discovery requires adding a service discovery request in WiFi direct via WifiP2pManager‘s addServiceRequest() method. Here is description from the official site:

wifip2pmanager_addservicerequest

After this, the service discovery request must be issued. Set the type of service you want to discover (DNS-SD or UPnP) and issue the request.

serviceRequest = WifiP2pDnsSdServiceRequest.newInstance();
wifiP2pManager.addServiceRequest(wifip2pChannel, serviceRequest,
new WifiP2pManager.ActionListener() {

    @Override
    public void onSuccess() {
        Log.d(TAG, "Added service discovery request");
    }

    @Override
    public void onFailure(int arg0) {
        Log.d(TAG, "ERRORCEPTION: Failed adding service discovery request");
    }
});
wifiP2pManager.discoverServices(wifip2pChannel, new WifiP2pManager.ActionListener() {

	@Override
	public void onSuccess() {
		Log.d(TAG, "Service discovery initiated");
	}

	@Override
	public void onFailure(int arg0) {
		Log.d(TAG, "Service discovery failed: " + arg0);
	}
});

There is a reason code associated with every failure callback. Check those error codes if you get a failure callback.

The map that was advertised is received in the DnsSdTxtRecordListener interface callback, which needs to be set in order to receive the map. DnsSdServiceResponseListener is the callback for receiving the advertised service itself. Here is how to set both:

wifiP2pManager.setDnsSdResponseListeners(wifip2pChannel,
new WifiP2pManager.DnsSdServiceResponseListener() {

    @Override
    public void onDnsSdServiceAvailable(String instanceName,
        String registrationType, WifiP2pDevice srcDevice) {

        // A service has been discovered. Is this our app?
        if (instanceName.equalsIgnoreCase(SERVICE_INSTANCE)) {
            // yes it is
         } else {
            //no it isn't
        }
    }
}, new WifiP2pManager.DnsSdTxtRecordListener() {

    @Override
    public void onDnsSdTxtRecordAvailable(
            String fullDomainName, Map<String, String> record,
            WifiP2pDevice device) {
        boolean isGroupOwner = device.isGroupOwner();
        peerPort = Integer.parseInt(record.get(TransferConstants.KEY_PORT_NUMBER));
        // further processing
    }
});

Here, in the text record callback, you can see the map is received with whatever info was set. After this it is the same as WiFi direct; you can check out my earlier post for the rest.

You need a connection info listener, same as in the WiFi direct example (old post), and you need to connect to the device using the info received as part of the onDnsSdTxtRecordAvailable or onDnsSdServiceAvailable callbacks (code example above).

In the code example above you can see that the port information is received as part of the string map in the text record callback, and in the connection info callback you can get the IP address of the group owner of the current WiFi group. After that, it’s the same age-old socket communication.

Check out the google sample for this. My sample app source is available on git.

Happy coding!!!

Let me know what you think. Got any questions? shoot below in comments.

-Kaushal D (@drulabs twitter/github)

drulabs@gmail.com

Local networking in android – WiFi direct


In my earlier blog post I discussed data sharing between two android devices in the same network using NSD. In this post I will explain communication between two non-connected android devices (they can be connected to the same or different networks, it doesn’t really matter) via WiFi direct. The devices should be in WiFi range. I will start with a bit of theory about WiFi direct and then we will see how to implement it using android APIs (the sample app source code link is at the end of this post).

As in the earlier post, the problem addressed is sharing IP and port information. Communication, again, will be socket communication.

WiFi direct is a WiFi-certified standard enabling devices to connect with each other without requiring a wireless access point (a.k.a. router or WiFi hotspot). Using it, devices can communicate with each other at typical WiFi speeds. Its setup is much simpler than ad hoc wireless connections (which allow two or more devices to connect and communicate, but are limited to 11 Mbps) or Bluetooth. One member is assigned a limited access point role and the other members connect to it as regular clients. WPS and WPA2 are used for encryption to keep the communication private.

The peer device acting as the current access point is said to assume the group owner role in the WiFi direct group. A WiFi direct group consists of 1 group owner and the devices or peers connected to it as clients (P2P clients). The group owner device sets the properties for communication, like the operating channel, whether the group is persistent, the encryption type etc. Any compatible device with the right hardware and android ICS or above (API 14+) can assume the group owner role. After role negotiation (group owner or P2P client), devices assume their decided roles, and the group owner starts operating in access point mode (this access point will not be visible under the available networks of mobile devices).

Now coming to android. Android’s WiFi P2P framework complies with WiFi direct certification program. It consists of:

  • Methods that allows us to discover, request and connect to peers (android.net.wifi.p2p.WifiP2pManager).
  • Listeners that notify us of the success and failure of WifiP2pManager’s method calls.
  • Intents to notify specific events, such as new peer joined, connection dropped etc. (like WIFI_P2P_PEERS_CHANGED_ACTION and WIFI_P2P_CONNECTION_CHANGED_ACTION)

Discovering peers

You need the following permissions for using WiFi direct for communication

    <uses-permission android:name="android.permission.INTERNET" android:required="true"/>
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" android:required="true"/>
    <uses-permission android:name="android.permission.CHANGE_WIFI_STATE" android:required="true"/>

Internet permission is required for using sockets anyway.

wifiP2pManager.discoverPeers(wifip2pChannel, new WifiP2pManager.ActionListener() {
    @Override
    public void onSuccess() {
        // discovery has been successfully started
    }

    @Override
    public void onFailure(int reasonCode) {
        // discovery failed to start. checkout reason code
    }
});

The above method is all you need to start WiFi direct peer discovery. If you get a callback on the failure method, check the reason code.

Getting peer list

Getting the peer list is tricky, as WifiP2pManager will not simply hand it to you. It is done via a broadcast receiver registered dynamically (not in the manifest file).

(image: dynamically registering the WiFi P2P broadcast receiver with its intent filter actions)
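
Since the image is not reproduced here, below is a minimal sketch of the registration it showed (these are the standard WifiP2pManager intent actions; the receiver instance is assumed to be your own WiFiDirectBroadcastReceiver):

IntentFilter intentFilter = new IntentFilter();
intentFilter.addAction(WifiP2pManager.WIFI_P2P_STATE_CHANGED_ACTION);
intentFilter.addAction(WifiP2pManager.WIFI_P2P_PEERS_CHANGED_ACTION);
intentFilter.addAction(WifiP2pManager.WIFI_P2P_CONNECTION_CHANGED_ACTION);
intentFilter.addAction(WifiP2pManager.WIFI_P2P_THIS_DEVICE_CHANGED_ACTION);

// register dynamically (e.g. in onResume()), unregister in onPause()
registerReceiver(wifiDirectReceiver, intentFilter);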

The second filter action in the sketch above (WIFI_P2P_PEERS_CHANGED_ACTION) is what is triggered when a new peer joins (or leaves). When a broadcast with this action is received, we can request the peers from WifiP2pManager. It takes a callback and returns the entire peer list, not just the new entries, so if you are displaying a list, clear it and add all the peers received in the callback. Here is the broadcast receiver’s code:

if (WifiP2pManager.WIFI_P2P_PEERS_CHANGED_ACTION.equals(action)) {
    // request available peers from the wifi p2p manager. This is an
    // asynchronous call and the calling activity is notified with a
    // callback on PeerListListener.onPeersAvailable() of passed activity
    // the activity implements the listener interface
    if (wiFiP2pManager != null) {
        wiFiP2pManager.requestPeers(channel, activity);
    }
}

The overridden callback will be called asynchronously, and the peer list can be extracted there.

@Override
public void onPeersAvailable(WifiP2pDeviceList peerList) {
    List<WifiP2pDevice> devices = new ArrayList<>(peerList.getDeviceList());

    //do something with the device list
}

So now we have the peer list. We can display this list to the user or directly attempt connections. Once a peer is discovered, we need to send a connection request to connect and form a WiFi direct group.

Connecting with a peer

This step requires the WiFi MAC address of the peer device. It is received as part of the peer info in the discovery step. Here are the properties of WifiP2pDevice:

(image: WifiP2pDevice fields, including deviceAddress and status)

The WifiP2pDevice list that we received in the onPeersAvailable() method of the PeerListListener callback has exactly what we need to make a connection request.

Here is how to make a connection request using WifiP2pManager.

WifiP2pConfig config = new WifiP2pConfig();
// device address from the WifiP2pDevice received during discovery
config.deviceAddress = device.deviceAddress;
config.wps.setup = WpsInfo.PBC;
config.groupOwnerIntent = 4;
wifiP2pManager.connect(wifip2pChannel, config, new WifiP2pManager.ActionListener() {
    @Override
    public void onSuccess() {
        // Connection request successfully sent
    }

    @Override
    public void onFailure(int reasonCode) {
        // Failed to send connection request.
    }
});

Just create a config object, set the properties and call the connect method. The group owner intent value sets the probability of this device becoming the group owner. Its value varies from 0 to 15, where 15 means the highest chance of the connection request sender becoming the group owner. But whatever code you write, you must handle both the group owner and the regular P2P client scenarios.

Once the request is sent to a device, that device’s user must accept the connection request from a system prompt. After that, role negotiation happens and the devices move to their decided roles. After a successful connection, a connection info request can be issued via WifiP2pManager, which sends a callback to WifiP2pManager.ConnectionInfoListener. It has only one method:

(image: WifiP2pManager.ConnectionInfoListener with its single onConnectionInfoAvailable() method)

The connection info object, WifiP2pInfo, contains the IP information of the group owner. Check below:

(image: WifiP2pInfo fields: groupFormed, isGroupOwner, groupOwnerAddress)
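
Since the screenshots are not reproduced here, a minimal sketch of issuing the request and consuming WifiP2pInfo (standard API; the comments describe typical next steps):

wifiP2pManager.requestConnectionInfo(wifip2pChannel,
        new WifiP2pManager.ConnectionInfoListener() {
    @Override
    public void onConnectionInfoAvailable(WifiP2pInfo info) {
        if (info.groupFormed && info.isGroupOwner) {
            // this device is the group owner: start a ServerSocket here
        } else if (info.groupFormed) {
            // regular P2P client: connect to the group owner's IP
            String ownerIP = info.groupOwnerAddress.getHostAddress();
        }
    }
});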

So now our first problem is addressed: we have the IP address of the group owner, and we also know whether the current device is the group owner or not. Basically, every peer in the group knows the group owner’s IP address.

The remaining problem is sharing the port a device is listening on. I couldn’t find any proper solution for this, so I solved it by pre-fixing one port: the first communication happens over the fixed port and carries the dynamically allocated port number, and after that it is regular socket communication over the dynamic port, with clients sharing their own info the same way. Check out my sample app for this.

Even this problem can be solved using WiFi direct service discovery. Watch out for my next post on this topic.

One last thing to cover in this post is a method called createGroup(). This method is for supporting legacy devices with no WiFi direct hardware. It basically creates an access point; any device with WiFi capability can connect to it like a regular access point and share data like we did in my previous post. On top of that, it works as expected in the WiFi direct scenario, so it supports both types of devices.
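
Creating such a group is a single call (sketch):

wifiP2pManager.createGroup(wifip2pChannel, new WifiP2pManager.ActionListener() {
    @Override
    public void onSuccess() {
        // this device is now the group owner / access point
    }

    @Override
    public void onFailure(int reasonCode) {
        // check the reason code
    }
});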

Sample app code available in github.

Happy coding!!!

Let me know what you think. Got any questions? shoot below in comments.

-Kaushal D (@drulabs twitter/github)

drulabs@gmail.com

Local networking using NSD (Network service discovery)


In this post we will see how data can be transferred between two android devices in the same network. What follows is a bit of theory and the implementation in android studio. There is a source code link at the bottom of the page for reference.

In LAN communication, a server listens for client requests on one port (with a blocking connection), and a client connects and communicates with the server using that IP address and port information. This is basic socket communication in a client-server architecture.

LAN communication has been around for ages, be it between laptops, desktops or other mobile devices, and socket communication is nothing new in the world of networking. However, there are mainly 2 problems associated with it:

  • Device(s) is/are not aware of the IP address of other device(s).
  • Device(s) do not know the port other device(s) is/are listening on.

The 2nd problem mentioned above is solvable by fixing a port in advance. Say port 35627 is fixed for our particular application; once we know the IP address, we also know the port we have fixed for our application. But this has side-effects. The OS normally hands out any free port when an application requests one, or uses ports for its own purposes. If the fixed port is in use by another application and you try to use it, you will get a port-already-in-use exception. To avoid this, the port must be requested dynamically from the OS, and the OS will provide some free port (hence no chance of a port-already-in-use exception).

The other device(s) must then somehow learn this port and IP information in order to communicate.
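
Requesting a free port from the OS is a one-liner in Java: pass 0 and read back what was assigned (sketch):

// passing 0 asks the OS for any free port
ServerSocket mServer = new ServerSocket(0);
int mPort = mServer.getLocalPort(); // the dynamically assigned port
// this is the number the other device(s) somehow need to learn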

With NSD (Network service discovery), this very problem is addressed. Communication happens over sockets like always, but NSD equips us with a way to share the IP and port information. Let’s see what NSD is all about, and how to solve the mentioned problem using it.

Android NSD allows our app to identify devices in the same network that offer the services we are requesting. With NSD we can register, discover and connect to our service (or other services) over the network. A service is basically some information that is advertised over the network, similar to a broadcast receiver and intent broadcast in android. Device 1 advertises some service, say multi-player chess, and device 2 discovers it and connects to it: LAN chess use case solved.

Basic steps involved in NSD are depicted in below figure:

(images: the NSD flow, from service registration and discovery to resolution and socket communication)

As you can see from above, steps involved in NSD are as follows:

  • Application starts and registers a service over network (optional step, avoid if all you want is to discover a service).
  • App starts service discovery over network.
  • In callback method a service object is received when service is FOUND.
  • In callback method a service object is received when service is LOST.
  • Resolve the found service and extract the port and IP information of the service advertiser.
  • Use socket to establish a connection and communicate.

Once the port and IP information is retrieved by resolving the found service, socket communication (yes, the ye-old socket communication) is used to transfer data. Let’s see how these steps are done in code.

Registering your service

Android’s NsdManager class is used for this; simply call registerService().

// get the NsdManager system service first
NsdManager mNsdManager = (NsdManager) getSystemService(Context.NSD_SERVICE);

NsdServiceInfo serviceInfo = new NsdServiceInfo();
serviceInfo.setPort(port);
serviceInfo.setServiceName(mServiceName);
serviceInfo.setServiceType(SERVICE_TYPE);
mNsdManager.registerService(
      serviceInfo, NsdManager.PROTOCOL_DNS_SD, mRegistrationListener);

An NsdServiceInfo object needs to be created and passed to the registerService() method. It holds the service information to be advertised. The service name can be any name you want your service to have, a simple string. The service type is also a string and needs to be in the “_<protocol>._<transportLayer>” format. You can choose a service type from IANA; for the demo I created, I simply used an unregistered one, “_localdash._tcp“.

The next parameter is the protocol; please use the one mentioned in the code, I didn’t find any other protocol. DNS_SD stands for Domain Name System based Service Discovery.

The registration listener passed as the last parameter is a callback. It contains 4 methods related to registration: they fire when registration succeeds, when it fails, when un-registration happens, and so on. Click on this link to know more.
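
A skeleton of that listener (standard NsdManager API; the body comments are mine):

NsdManager.RegistrationListener mRegistrationListener =
        new NsdManager.RegistrationListener() {
    @Override
    public void onServiceRegistered(NsdServiceInfo info) {
        // service is now advertised; save the final name, android may
        // have renamed it to resolve a conflict on the network
        mServiceName = info.getServiceName();
    }

    @Override
    public void onRegistrationFailed(NsdServiceInfo info, int errorCode) { }

    @Override
    public void onServiceUnregistered(NsdServiceInfo info) { }

    @Override
    public void onUnregistrationFailed(NsdServiceInfo info, int errorCode) { }
};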

A callback on the registration listener’s onServiceRegistered() method means the service has been successfully advertised. Next is discovering the advertised services.

Discovering services

For discovering services using NSD we use the discoverServices() method (of course).

mNsdManager.discoverServices(
        SERVICE_TYPE, NsdManager.PROTOCOL_DNS_SD, mDiscoveryListener);

Here we are telling NsdManager which type of services to discover. The callback here is the discovery listener; it informs us when a service is found or lost. It has many callback methods, but the one that matters most to us is onServiceFound(). Click here to know about the other callback methods of the discovery listener. For every failure method there is a reason code in the parameters. Note that the service found here can be a different service than what we are expecting, so make sure to verify the service type and service name before proceeding. The source for my demo app is shared on github, do check it out to see how to do it (github link). The NsdServiceInfo object received in onServiceFound() cannot be used yet; it needs to be resolved first.
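
A skeleton of the discovery listener with that verification in place (standard NsdManager API; skipping our own service by name is my addition):

NsdManager.DiscoveryListener mDiscoveryListener = new NsdManager.DiscoveryListener() {
    @Override
    public void onServiceFound(NsdServiceInfo service) {
        // verify type and name before resolving; also skip our own service
        if (service.getServiceType().equals(SERVICE_TYPE)
                && !service.getServiceName().equals(mServiceName)) {
            mNsdManager.resolveService(service, mResolveListener);
        }
    }

    @Override
    public void onServiceLost(NsdServiceInfo service) { }

    @Override
    public void onDiscoveryStarted(String serviceType) { }

    @Override
    public void onDiscoveryStopped(String serviceType) { }

    @Override
    public void onStartDiscoveryFailed(String serviceType, int errorCode) { }

    @Override
    public void onStopDiscoveryFailed(String serviceType, int errorCode) { }
};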

Resolving discovered services

Services can be resolved via resolveService() method of NsdManager. Here is the code for it:

mNsdManager.resolveService(service, mResolveListener);

This is called in the onServiceFound() method of the discovery listener (after confirming it is our service). The callback, the resolve listener, has two methods: one for when the service is resolved and one for when resolving fails (with an error code). Once the service-resolved callback is received, we can use the service info object in its parameter to retrieve the IP and port information. Here is how to do it:

// serviceInfo is the object received in onServiceResolved()
String ipAddress = serviceInfo.getHost().getHostAddress();
int port = serviceInfo.getPort();
//Data sending service
DataSender.sendCurrentDeviceData(LocalDashNSD.this, ipAddress, port);

Here we have the service advertiser’s IP address and the port information it set when advertising the service. Now, bang!!! Socket communication can happen. And all of this without any mischievous stuff, hard coding or magic numbers. Everything is truly dynamic.
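
For completeness, the resolve listener itself is a small skeleton (standard NsdManager API):

NsdManager.ResolveListener mResolveListener = new NsdManager.ResolveListener() {
    @Override
    public void onServiceResolved(NsdServiceInfo serviceInfo) {
        // hand the resolved ip/port over to the socket layer
    }

    @Override
    public void onResolveFailed(NsdServiceInfo serviceInfo, int errorCode) {
        // check the error code
    }
};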

If you don’t know how to perform basic socket communication, or you want to check out all the steps we discussed here, check out my demo app code on github.

Happy coding!!!

Let me know what you think. Got any questions? shoot below in comments.

-Kaushal D (@drulabs twitter/github)

drulabs@gmail.com

Gradle power – Automatically generate Android App Version Code and Name

One recommended practice in Android app development is upgrading the app version code and name in the gradle file for every code change. We have been doing this manually all this time, and there is an additional overhead (time) of gradle syncing for every change in the build gradle files. This trick automates the process using simple gradle methods and logic. Even if you forget to upgrade the version, this trick will take care of it.

It is not mandatory, but I recommend going through the basics of product flavours in Android, as we will be upgrading the versions of the default config and all the flavours. You can Google it or check out this link.

An Android app’s version code and name are defined in the module’s build.gradle file, and the same goes for all the product flavours. All are defined inside the android block. Here is a sample:

android {
...
    defaultConfig {
        applicationId "app.package.default"
        versionCode 1
        versionName '1.0'
        ...
    }

    productFlavors {
        free {
            applicationId "app.package.free"
            versionCode 101
            versionName '1.101.1'
            ...
        }
        pro {
           applicationId "app.package.pro"
           versionCode 301
           versionName '1.301.3'
           ...
        }
        freemium {
           applicationId "app.package.freemium"
           versionCode 501
           versionName '1.501.5'
           ...
        }
    }
...
}

FYI, version code and name can be overridden in flavours as shown above. In fact, any property of the default config can be overridden in a flavour; both use the same DSL object for configuration.

As you can see, we have to manually write “1.0” and 1 as the version name and code respectively. And whenever we change them, we need to wait for the gradle sync to finish. Now let’s see how to automate this process.

We are going to create a method that calculates an upgraded number, and we will use this method in the version name and code. Like this:

versionCode getCustomNumber()
versionName "1.0." + getAnotherOrSameCustomNumber()

These methods are defined in the project-level build.gradle file. You can use any logic inside getCustomNumber() and getAnotherOrSameCustomNumber() that returns an upgraded number every time it is called; make sure it does, as a lower version code will not upgrade your app. A simple timestamp-based implementation, getCustomVersionCode(), is defined below.

def getCustomVersionCode() {
    def date = new Date()
    def formattedDate = date.format('yyMMddHHmm')
    def code = formattedDate.toInteger()
    return code
}

allprojects {
    version = '1.0.' + getCustomVersionCode()
    repositories {
        jcenter()
        mavenCentral()
    }
}

Defining the base version in the allprojects block is important, or else you will see a null version code and name, which may result in a build failure. Here’s how we can use the method defined above in the app-level build.gradle file (including product flavours).

android {
    ...
    defaultConfig {
        applicationId "app.package.default"
        versionCode getCustomVersionCode()
        versionName '1.0.' + getCustomVersionCode()
        ...
    }

    productFlavors {
        free {
            applicationId "app.package.free"
            versionName '1.101.' + getCustomVersionCode()
            ...
        }
        pro {
            applicationId "app.package.pro"
            versionName '1.301.' + getCustomVersionCode()
            ...
        }
        freemium {
            applicationId "app.package.freemium"
            versionName '1.501.' + getCustomVersionCode()
            ...
        }
        ...
    }
}

You can even use the revision of the source control system you use in your project; you just need to find the right gradle plugin for your source control.
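
For example, with git, the commit count gives a monotonically increasing number (a sketch; assumes git is available on the build machine’s PATH):

def getGitVersionCode() {
    // number of commits reachable from HEAD, always increasing
    def count = 'git rev-list --count HEAD'.execute().text.trim()
    return count.toInteger()
}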

If you remove the base version from the allprojects block of the project-level build.gradle file, you will see “null” in the version code in the generated manifest, and your app may or may not run. So please avoid that.

Let me know what you think.

-Kaushal D (@drulabs twitter/github)

drulabs@gmail.com

Gradle power – android product flavours and configuration


Product flavours have been around for quite some time now. I feel product flavours are one of the coolest things about android studio; I am so smitten by them that I decided to write a post about it :p. What follows is an introduction to product flavours in android, along with their application, grouping, filtering and configuration. If you have already applied these and have any questions, shoot in the comments below.

Product flavours are very useful when you want to create multiple versions of your app, like demo, free and paid. You must have seen various versions of the same app in google play, like angry birds free and angry birds hd. You can have a single source code and generate any number of variants of your app. If you want to write common code that can be used in various types of apps, product flavours are not the way to go; what you are looking for is an android library.

Let’s see how to create product flavours

android {...
    defaultConfig {...}
    productFlavors {
        flavour1 {...}
        flavour2 {...}
        flavour3 {...}
    }
    ...
    buildTypes{...}
}

As you can see, this skeleton goes in the android block of the build.gradle file (app/module level). By writing this we have created 3 flavours called flavour1, flavour2 and flavour3 (duh..). By default android has 2 build types called debug and release (more can be added, like jnidebug), and a build variant (or variant) is a combination of a build type and a product flavour. So now there will be 6 build variants: flavour1-debug, flavour1-release, flavour2-debug, flavour2-release, flavour3-debug and flavour3-release. Inside these flavour blocks go flavour-specific properties and methods like applicationId (the app package name), versionCode, buildConfigField etc. We will see these a little later. Click here to see the ProductFlavor DSL (domain specific language) object that is used to configure flavour-specific properties and methods.

We all know about the default config. Whatever properties and methods are defined in the default config are inherited by all product flavours. The default config block also uses the ProductFlavor DSL object. This means that everything that goes inside the default config block can go inside a flavour block, and each product flavour can override the properties and methods defined in the default config block.

android {...
    defaultConfig {
        applicationId "the.default.packagename"
        minSdkVersion 8
        versionCode 10
    }
    productFlavors {
        flavour1 {
            applicationId "the.default.packagename.flavour1"
            minSdkVersion 15
        }
        flavour2 {
            applicationId "the.default.packagename.flavour2"
            versionCode 20
        }
        flavour3 {...}
    }
    ...
    buildTypes{...}
}

In the above example some properties are overridden and the rest are inherited from the default config block. Click here to see the list of all the properties and methods that can be configured inside the default config and flavour blocks (or groovy closures). The applicationId property assigns a different default package to each flavour; this makes sure that different variants of your app can be installed on the same device.

SourceSets for product flavours

This creates 6 source sets:

"src/flavour1" - android.sourceSets.flavour1
"src/flavour2" - android.sourceSets.flavour2
"src/flavour3" - android.sourceSets.flavour3
"src/androidTestFlavour1" - android.sourceSets.androidTestFlavour1
"src/androidTestFlavour2" - android.sourceSets.androidTestFlavour2
"src/androidTestFlavour3" - android.sourceSets.androidTestFlavour3

// All these folder follow the java/src/main/.. structure 
// for any flavour and test specific customization. click 
// below link to know more about android sourceSets.
// http://goo.gl/NvAg74

This flavour-specific folder structure is where the flavour-specific code resides. For example, flavour1 customization must be done inside the “src/flavour1/{java, res, assets}” folders, while the common code lives in the “src/main/…” directory. If you want to change the app name for, say, flavour2, all you need to do is define app_name (or whichever string resource you are using as the app name in the manifest) in “src/flavour2/res/values/strings.xml“. Override only what you need; do not copy the entire xml or res file into the flavour-specific folder, let the android resource merger do its job. Say you are using “@drawable/ico_app or @mipmap/ico_app” as the app icon; this can easily be configured per flavour by keeping flavour-specific icons in the respective folder structures. E.g. for flavour3, just name the flavour3-specific icon ico_app and keep it in the drawable or mipmap folder (whichever you are using) in the flavour3-specific directory.

Multi-flavour variants

Now, the above is useful when the variants of your app are decided by one dimension. e.g. if the dimension is price, you can create flavours like free, paid, freemium etc. But what if the requirement is to create variants based on multiple dimensions? For an environment dimension there could be three flavours “dev“, “staging” and “production“, three more for a price dimension “free“, “freemium” and “paid“, and maybe another dimension on top. With plain flavours you can select any one of the six flavours plus debug or release, but the product flavour we are actually interested in depends on both dimensions, something like free-dev-debug, paid-production-release etc. Here we are trying to group product flavours, which is not allowed by default. This can be enabled via the dimension attribute of product flavours. We declare the dimensions via the flavorDimensions attribute of the android block, and then assign a dimension to each flavour. Here is an example:

android {...
    flavorDimensions "country", "price"
    productFlavors {
        free {dimension "price"...}
        pro {dimension "price"...}
        India {dimension "country"...}
        China {dimension "country"...}
        Russia {dimension "country"...}
    }
    ...
    buildTypes{...}
}

In this example, as we can see, there are two flavour dimensions: country and price. The country dimension has three flavours India, China and Russia; price has two flavours free and pro. This in turn creates 12 build variants for us:

India-free-debug
India-free-release
India-pro-debug
India-pro-release
China-free-debug
China-free-release
China-pro-debug
China-pro-release
Russia-free-debug
Russia-free-release
Russia-pro-debug
Russia-pro-release

As you can see, just by defining the flavorDimensions and dimension attributes, the product flavours can be grouped, and android creates variants for all possible combinations of flavours across dimensions and build types. These variants are reflected everywhere, including the build variants tab on the lower left side of android studio.

Filtering product flavours

Now look at the list of build variants above. What if we want to filter this list? Say I don’t want the Russia-free and India-pro variants for some reason. This is possible by ignoring some of the build variants based on some condition. Below is an example of ignoring the Russia-free variants.

//Filtering variants
android {
...
    variantFilter { variant ->
        def names = variant.flavors*.name
        def buildTypeName = variant.buildType.name
        // if buildtype is required for filtering use
        // the above field
        if (names.contains("Russia") && names.contains("free")) {
            variant.ignore = true
            // or variant.setIgnore(true)
        }
    }
    ...
}

The build variants can be ignored by setting the ignore field to true or by calling setIgnore(true). This global filtering is reflected everywhere including the build variants tab on the lower left side of android studio, the assemble and install tasks etc.

Flavour specific dependency

You must have seen testCompile “junit…” in your projects; that is a scoped dependency, and flavour specific dependencies work the same way. junit for example is a testing library and must not be shipped with the release apk (why increase the size of your apk with something that is required only for testing). Adding a “<flavourName>Compile” line in the dependencies section of the app level build.gradle adds the dependency for that particular flavour only. If you are familiar with facebook’s stetho library, then you know it is only for development purposes and must not be shipped with the release version. This is an awesome feature that comes with android product flavours. Some examples:

testCompile 'junit:junit:4.12'
stethoFlavourCompile 'com.facebook.stetho:stetho:1.3.1'
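
Putting these together, a dependencies block could look something like this minimal sketch (the free flavour and the ads artifact are assumed examples, not from the flavours defined above):

dependencies {
    // shipped with every variant
    compile 'com.android.support:appcompat-v7:25.3.1'
    // compiled only into the free flavour (assumes a flavour named "free")
    freeCompile 'com.google.android.gms:play-services-ads:10.2.6'
    // test only, never shipped inside the apk
    testCompile 'junit:junit:4.12'
    // debug builds only
    debugCompile 'com.facebook.stetho:stetho:1.3.1'
}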

There are a couple of things to discuss before we can conclude product flavours.

BuildConfig constants and res values

Now we know that flavour specific code goes in the flavour specific sourceSet, but sometimes we need flavour specific behaviour in the main code base (“src/main/…“). The problem here is that the main source code doesn’t know which flavour is being built. For this there is something called BuildConfig. It is an auto generated file and must not be tampered with. This file contains flavour specific constants and can be used in the main source code. Check out the ProductFlavour DSL object for available properties and methods. You can set flavour specific constants and resources like this:

android {...
    flavorDimensions "country", "price"
    productFlavors {
        free {dimension "price"
            buildConfigField("String", "featureList", "\"restricted\"")
            resValue("bool", "ads", "true")
        ...}
        pro {dimension "price"
            buildConfigField("String", "featureList", "\"all\"")
            resValue("bool", "ads", "false")
        ...}
       India { applicationId "my.app.india"
            dimension "country"
            buildConfigField("String", "shortcode", "\"IN\"")
       ...}
       China { applicationId "my.app.china"
            dimension "country"
            buildConfigField("String", "shortcode", "\"CHN\"")
       ...}
       Russia { applicationId "my.app.russia"
            dimension "country"
            buildConfigField("String", "shortcode", "\"RU\"")
       ...}
    }
    ...
    buildTypes{...}
}

These are then used to generate the BuildConfig class and dynamic resources (check out the resources in the generated folder). Note the escaped quotes: the value of a String buildConfigField must itself contain the quotes. For the Russia-pro-debug build variant (from the flavour definition above), the generated build config will look something like this:

//Build config for Russia-pro-debug build variant
public final class BuildConfig {
    public static final boolean DEBUG = Boolean.parseBoolean("true");
    public static final String APPLICATION_ID = "my.app.russia";
    public static final String BUILD_TYPE = "debug";
    public static final String FLAVOR = "RussiaPro";
    public static final int VERSION_CODE = 1;
    public static final String VERSION_NAME = "1.0";
    public static final String FLAVOR_price = "pro";
    public static final String FLAVOR_country = "Russia";
    // Fields from product flavor: pro
    public static final String featureList = "all";
    // Fields from product flavor: Russia
    public static final String shortcode = "RU";
}

As you can see, the constants are specific to the selected flavour and can then be used in the main source. Easy peasy isn’t it? The generated BuildConfig file is in this location (“<app or module>\build\generated\source\buildConfig\<flavour>\<debug or release>\my\app\package“).
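
Here is a minimal sketch of how these constants might be consumed from the main source (the FeatureManager class and the URL scheme are made up for illustration):

// lives in src/main/java and compiles unchanged for every flavour
public class FeatureManager {

    // "all" for the pro flavour, "restricted" for free
    public static boolean isEverythingUnlocked() {
        return "all".equals(BuildConfig.featureList);
    }

    // country specific behaviour without any flavour specific source file
    public static String apiBaseUrl() {
        return "https://api.example.com/" + BuildConfig.shortcode.toLowerCase();
    }
}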

Dynamic manifest

Sometimes we need to use the app’s package in the manifest file, but with product flavours the app’s package (application id) is no longer fixed. In this case we can make the package dynamic in the manifest using the ${applicationId} placeholder. The application id is then picked from build.gradle at build time. Here is a sample:

<manifest xmlns:android="http://schemas.android.com/apk/res/android......
...
<permission 
    android:name="${applicationId}.permission.C2D_MESSAGE"
    android:protectionLevel="signature" />
...
<application...

This is an example of a declared permission for android GCM. Notice the placeholder syntax around applicationId; that is where the magic is happening. The manifest merger replaces the placeholder with the variant’s application id at build time.
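
The same mechanism works for custom placeholders too, via the manifestPlaceholders property. A minimal sketch (the hostName placeholder and the host values are made up for illustration):

android {...
    defaultConfig {
        // available in the manifest as ${hostName}
        manifestPlaceholders = [hostName: "www.example.com"]
    }
    productFlavors {
        free {
            // flavour specific override
            manifestPlaceholders = [hostName: "free.example.com"]
        }
    }
}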

Let me know what you think. Happy coding!!!

Have questions? Did I miss anything? Shoot below in comments.

-Kaushal D (@drulabs twitter/github)

drulabs@gmail.com

Gradle power – changing default apk name via script plugin

Android studio is the IDE for android application development and it uses gradle for build automation. Yet many android developers (including me) are not aware of the power gradle packs. The level of customization that gradle offers is amazing. This is the first post of the “power of gradle in android studio” series on my blog, I hope it helps you become a better android developer.

This post is not about gradle basics, it is about using gradle to suit our needs. I recommend taking a look at this presentation for understanding just enough gradle to get started, it is optional though if all you want is to change the default apk name.

There are many ways to change that default apk name, which goes something like “app-flavor1-flavor2…-debug.apk“. One way is to create a gradle task, another is to loop through all the build variants etc. In this post we will see how to use a script plugin to change that name.

There are basically two types of plugins in gradle: script plugins and binary plugins. I am sure you have seen binary plugins. Whenever we use the apply plugin syntax we are applying a binary plugin. For example this is how the android plugin is applied in android applications: apply plugin: “com.android.application”. Script plugins are separate gradle files that are included in the application using the syntax: apply from: “../newgradlefile.gradle”.

Let’s see how we can create a script plugin for changing the default apk name in android applications. Below is the content of a gradle file that I named apknomenclature.gradle; this file lives in the root directory of the android project:

android.applicationVariants.all { variant ->
    def appName
    // use the applicationName property from gradle.properties if supplied;
    // otherwise fall back to the parent folder's name
    if (project.hasProperty("applicationName")) {
        appName = applicationName
    } else {
        appName = parent.name
    }
    variant.outputs.each { output ->
        def newApkName
        // only rename the signed and zipaligned apk
        if (output.zipAlign) {
            newApkName = "${appName}-${variant.name}.apk"
            output.outputFile = new File(output.outputFile.parent, newApkName)
        }
    }
}

This is the content of the apknomenclature.gradle file in the root folder of my android application. What it basically does is loop through all the build variants of the android application and change the output file. It checks if there is a property called applicationName in the gradle.properties file (applicationName=MyCustomAppName). If it is there it takes that name, or else it picks the root folder’s name, and renames the apk as per the logic defined. This doesn’t change anything except the name of the apk, you can run and debug your application like you did earlier.

The zipAlign if condition is there to change the name of only the signed and aligned apk, not the unaligned one. We don’t need to bother about unaligned apks anyway.

Notice the variable called newApkName, you can add your custom logic to change this variable’s value, and that will be reflected in the resulting apk name. You could add a timestamp or the version code of your app, but I wouldn’t recommend using those in the apk name as the name is cached in android studio. So if you change the version code very frequently you will notice this error: “apk “your_apk_name_timeinmillis_…” doesn’t exist…“. So just keep the name as in the code above or use something that doesn’t change frequently.

Now that our apknomenclature.gradle file (our script plugin) is ready, we need to apply it in the app level build.gradle file like this: apply from: “../apknomenclature.gradle”.
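
So the top of the app level build.gradle would look roughly like this (a sketch, assuming the script plugin sits in the project root as described above):

// app/build.gradle
apply plugin: 'com.android.application'   // binary plugin
apply from: '../apknomenclature.gradle'   // our script plugin

android {...}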

This will change the apk name in the outputs directory. If you run into any issues, just comment out the apply from line, run the app, then uncomment it and run again.

I know it’s not much, just changing the apk name, but it’s a start. Watch this space for more.

Let me know what you think. Happy coding!!!

Have questions? Did I miss anything? Shoot below in comments

-Kaushal D (@drulabs twitter/github)

drulabs@gmail.com


An introduction to android data binding library (I)


The data binding library is useful for creating declarative layouts and minimizing the glue code required to bind application logic and layouts. The data binding support library offers both flexibility and broad compatibility, and can be used on all android platform versions starting at 2.1 (API 7 onwards). This library is still in beta and has some caveats, but developers are free to use it in production.

Developer’s guide for Data Binding.

Write apps faster with the data binding library; that is the entire intent behind it. Watch this video by the google devs for more info.

This is exclusively for developers, the end user of your app will not see any difference whatsoever.

Introduction:

Binding data with UI is a very tedious task in Android. No matter what you are building, to update even one UI element you have to find it by id and then cast it to the widget type you are using; then and only then can it be updated. There was no proper way of binding data. With the announcement of the data binding library at Google IO 2015, there is a better way. With this library, developers can write apps faster. It is a support library and is supported all the way back to API version 7 (Android 2.1). Check out the source code and snapshot attached with this article.

What is Possible with Data Binding Library:

With Data binding Library, you can:

  • Write apps faster
  • Remove “find view by id” and casting into widgets from your code
  • Minimize glue code required to bind your UI and data
  • Define custom attributes for XML layouts
  • Bind custom variables and events
  • Use observable fields so that UI gets updated automatically when state of a bind object changes (not covered in this article)

This is just a subset of what is possible. You can do a lot more. I will post more in upcoming articles.

Setting Up Android Studio:

Before you begin exploring and developing with the data binding library, a few steps are needed to set up your Android Studio IDE (must be 1.3 or higher with Gradle plugin 1.5 or higher). Set up your module’s gradle file:

android {
    ....
    // enable data binding
    dataBinding {
        enabled = true
    }
}

Add this dependency in your project’s gradle file:

classpath "com.android.databindig:databinder:1.0.-rc1"

And that’s it. Your Android studio is now ready for the data binding library.

If you are creating a library project using the android data binding library, any project using your library must also enable data binding in its module’s gradle file (as explained above; the data binder dependency is not required). Don’t forget to mention this in your library’s release notes.

Declaring a Data Bind Layout:

This is how we declare regular or normal layouts in Android:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto">
    <TextView android:id="@+id/txt_version_name"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"......

For writing data bind layouts, the root element of the layout xml must be changed to the <layout> tag. Optionally, include a <data> tag if a variable is to be used for populating UI elements or an import statement is required. Here is an example.
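
A minimal sketch of such a layout (the vItem variable is the one used in the expressions below; its com.example.model.AppItem type is an assumed name for illustration):

<?xml version="1.0" encoding="utf-8"?>
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <!-- needed when expressions reference View.VISIBLE etc. -->
        <import type="android.view.View"/>
        <!-- the variable bound to this layout -->
        <variable name="vItem" type="com.example.model.AppItem"/>
    </data>
    <RelativeLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">
        <TextView android:id="@+id/txt_version_name"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@{vItem.name}"/>
    </RelativeLayout>
</layout>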

As you can see from the above example, with just a little extra code you can easily write data bind layouts or migrate your old layouts. No reflection hacks are used for this; all the processing is done at compile time. The <layout> tag tells the XMLLayoutProcessor that this is a data bind layout. The <data> tag is used to define a variable that is to be used in populating the UI, and to import custom or other classes. For example, if the View.VISIBLE property is used in a data bind expression (as shown below), the View class (android.view.View) must be imported before using it.
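
On the java side the layout is then inflated through the generated binding class. A minimal sketch (ActivityMainBinding is generated from a layout file named activity_main.xml; the AppItem constructor arguments are made up for illustration):

// in MainActivity.onCreate(...)
ActivityMainBinding binding =
        DataBindingUtil.setContentView(this, R.layout.activity_main);
// hand the model to the layout; the binding expressions do the rest
binding.setVItem(new AppItem("DataBindDemo", "v1", 7, true));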

Some Data Binding Expressions:

In the earlier section, I explained what is possible with the data binding library. Here is how you can implement those:

  • Bind custom variables:
    • The custom variable (vItem) was described in the previous section
    • Apart from that, you can bind arrays, lists, primitive types, sparse lists, maps and more
    • All declaration and imports are inside <data> tag
  • Move basic UI logic to XML layout files:
    android:text="@{vItem.name.substring(1)}" //substring in xml
    android:text='@{vItem.code + "(" + vItem.versionNum+")"}' //String manipulation
    android:visibility="@{vItem.even ? View.VISIBLE : View.INVISIBLE}"
  • Define custom attributes:

There is no such built-in attribute as imageURL; you define it yourself by binding it to a custom method with the @BindingAdapter annotation, e.g.:

@BindingAdapter("bind:imageURL")
public static void loadImage(ImageView img, String url) {
    Picasso.with(img.getContext()).load(url).into(img);
}

Cool, isn’t it? You can imagine the possibilities with this. One awesome example is setting the text font. Increases readability twofold.

app:textFont="@{'awesome-font.ttf'}"

@BindingAdapter("bind:textFont")
public static void setFont(TextView tv, String fontName) {
    String fontPath = "/assets/fonts/"+fontName;
    // set font in text view here
}

Demo App:

A screenshot of the sample app is attached with the downloadable source code below.


You can download the sample application from here. (Make sure you have followed the setup steps properly.)

Let me know what you think. Happy coding!!!

Have questions? Did I miss anything? Shoot below in comments

-Kaushal D (@drulabs twitter/github)

drulabs@gmail.com